A Parochial Scene (Law & Order written by AI text generator)

This short video was created for the 2019 Performance Studies International (PSi) conference held in Calgary, Canada. As part of the Artistic Research Working Group, I was assigned another artist's work, to which I responded.


I responded to the work of Sharon Kahanoff. The text presented below the video was shared at PSi on July 5, 2019.


In brief, this clip from Law & Order (Season 3, Episode 5, "Blue Bamboo") features subtitles generated by Max Woolf's textgenrnn (a text-generating recurrent neural network). I fed four seasons' worth of L&O scripts into the machine learning program, and the newly generated text was inserted into this clip in the order it was generated.


Audio by Sharon Kahanoff (used with permission). Copyright of L&O: Wolf Films, Universal Television (1994). 

I found Law & Order scripts online (well, subtitles at least). I hand-scraped four seasons of dialogue from L&O and fed it into a text generation model made by Max Woolf that uses Google's TensorFlow on the backend. The program learns the probability of each character occurring given the characters before it, and in doing so learns to imitate the style of the writing. Shakespeare is often used as training input for text generators.
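The "probability of each character occurring" idea can be illustrated with a tiny sketch. This is not textgenrnn itself (which uses a neural network), but a simple character-level Markov model built on the same intuition: learn which characters tend to follow a given context, then sample one character at a time. The `corpus` string here is a hypothetical stand-in for the scraped dialogue.

```python
import random
from collections import defaultdict

def build_model(text, order=3):
    """Count which characters follow each `order`-length context."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def generate(model, seed, length=40, rng=None):
    """Sample one character at a time from the learned frequencies."""
    rng = rng or random.Random(0)
    out = seed
    order = len(seed)
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # unseen context: stop early
            break
        out += rng.choice(choices)
    return out

# Hypothetical stand-in for four seasons of scraped dialogue.
corpus = "I don't know what you're talking about. I don't know what I mean."
model = build_model(corpus, order=3)
print(generate(model, "I d", length=40))
```

With a corpus this small the output mostly parrots the input; the RNN version generalizes better, but the garbled-yet-familiar quality of the sample lines below comes from the same character-by-character sampling.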


New dialogue was generated. Here are some sample lines of text generated by the software. Courtroom vocabulary, character names, and certain themes do recur:


I thought maybe we'd get all the time.

I wouldn't take it to the court? When I was going to talk to him.

She wanted to leave him and he was sick.

I don't understand why you could have been there? 

I said I would.

I have to indict her to the consenting to what we got the victim on the store.

I went back to protect neighbors.

And that's not a very clear address.

I want the crimes against her.

She was already convinced.

I don't know what I'm talking to you? No.

The one you made me walk around for parochial scene.

I don't know what you're talking about.

We don't know what I mean.

I don't have any idea who it's to get a favor.


Next, I decided to try putting the generated lines into an episode, in the exact order they were produced. I chose an existing episode somewhat arbitrarily and added audio from one of the videos generously provided by Sharon.


This whole process got me thinking about mistakes. What is a mistake? The text-generating software only knows how close it can come to imitation. I (hopefully) learn something from every mistake I make. Does a recurrent neural network have any conception of error? When it runs, it aims for verisimilitude of its input, so at best it can repeat or reorder what already exists. It has no taste, no sense of humour, and it certainly does not know how to tell the stories of:

the people, represented by two separate yet equally important parts, the police who investigate crime, and the district attorney’s office who prosecute the offenders.


You can try Max Woolf's textgenrnn here.
