AI brings Supreme Court decisions to life

**Artificial Intelligence Meets the U.S. Supreme Court**

The U.S. Supreme Court is an institution steeped in tradition, resistant to quick changes in the way it operates. But, like it or not, the justices are about to encounter artificially generated versions of themselves: avatars speaking words the justices actually spoke in court, words that were originally heard only by the people present in the courtroom.

Since 1996, Northwestern University professor Jerry Goldman has been pioneering ways to make the Supreme Court more accessible to the public. His nonprofit project, **Oyez**, went live on the internet that year, aiming to provide audio recordings of the Court's oral arguments and opinion announcements for every case decided by the Supreme Court dating back to 1955, the year the Court began taping its courtroom proceedings.

### The Importance of the Oyez Project

When the Oyez project debuted, it was a significant breakthrough. Until the early 1990s, the public was largely unaware that the Court had been taping its sessions. Moreover, the preservation of these tapes was chaotic, with many recordings lost forever. Access to the audio was severely limited; no one outside the Court had access until months after the case was heard and decided. Usually, tapes from the previous term became available only at the start of the next one.

Everything changed in 2020, when the COVID-19 pandemic forced the Court to allow live broadcasts of oral arguments. The justices, hearing cases by telephone, could be listened to by the public in real time for the first time. Surprisingly, after the pandemic ended, the Court quietly kept the system in place, continuing live audio broadcasts without fanfare.

### Yet, One Key Piece Remains Unavailable

Despite this progress, one crucial part of the Court’s public proceedings remains inaccessible on the same day: the announcements of decisions. These announcements include summaries from the bench by the justices as well as occasional oral dissents.

To this day, only those physically present in the courtroom can hear and witness this drama in real time. The old system limiting access until the following term remains intact, effectively keeping this vital moment under wraps for months.

### Bringing the Court’s Proceedings to Life with AI

Now, Professor Goldman and his team are experimenting with new ways to recreate these moments of drama—even though official audio remains unavailable for months. Using artificial intelligence, they are creating visual and audio representations of what people in the courtroom saw and heard when decisions were announced.

As Goldman says, *“Since it’s public in the courtroom, it should be public for everybody. That’s simple.”*

### How Are They Creating These Visuals Without Cameras?

With no cameras allowed inside the Supreme Court, Goldman’s new site **On The Docket** uses AI-generated avatars to create the visuals.

University of Minnesota professor Timothy R. Johnson, one of the project's key architects in collaboration with the AI design company Spooler, describes the challenges the team faced. Early AI attempts proved comical, producing bizarre results such as justices disappearing from the bench or all bending forward simultaneously.

Ultimately, the team used photos and videos of the justices from public appearances to craft realistic avatars. These avatars mirror mannerisms, head tilts, and hand gestures, synced precisely with the authentic audio recordings.

### Ethical Considerations

The team confronted ethical questions about how realistic the avatars should appear. Should the video be indistinguishable from reality, or should it clearly indicate that it is AI-generated?

They opted to slightly cartoonize the video and explicitly mark it as AI-generated content. This approach ensures viewers understand that while the audio is authentic, the video is a reconstructed visualization.

### A Notable Example: Chief Justice John Roberts’ Summary

Their first AI-animated visual features Chief Justice John Roberts delivering a 14-minute summary from the bench. This was for the Supreme Court's 6-to-3 decision granting former President Trump, and future former presidents, broad immunity from prosecution for official acts taken while in office.

Following Roberts is Justice Sonia Sotomayor, presenting her dissent. Together, their passionate spoken words compose a riveting and somewhat eerie 38-minute sequence.

### Resistance from the Court

The Court is likely not pleased with this new interpretation of public access. Historically, the Court resisted sharing recordings of its oral arguments and announcements. Before 1993, these recordings were secret.

Law professor Peter Irons had signed a pledge to keep them confidential, but he published a book accompanied by cassette copies of important oral arguments anyway. The Court threatened legal action, then backed down, apparently conceding defeat.

Since then, recordings of oral arguments have been regularly released, and since the pandemic they have been broadcast live. But bench announcements remain locked away until months after decisions are made.

### Calls for Greater Transparency

Reporters and scholars have long requested live broadcasts of opinion announcements. Professor Goldman notes that papers from the early Warren Court era in the 1950s show that when the justices first considered recording oral arguments and opinion announcements, secrecy was never the intent.

However, all such requests have been met with silence from the Court, and AI technology cannot substitute for live audio or video.

**In summary**, AI is offering a new window into Supreme Court proceedings by recreating moments once hidden from public view. While the Court remains cautious about full transparency, projects like Oyez and On The Docket are pushing the boundaries toward a more accessible and engaging understanding of America’s highest court.
https://www.npr.org/2026/02/11/nx-s1-5711607/supreme-court-ai
