Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Sheverack, Roksolana | - |
dc.date.accessioned | 2021-07-13T13:14:46Z | - |
dc.date.available | 2021-07-13T13:14:46Z | - |
dc.date.issued | 2021-06-17 | - |
dc.identifier.uri | http://hdl.handle.net/2451/62802 | - |
dc.description | Best of Showcase Paper | en |
dc.description.abstract | The project's goal is to explore the field of natural language processing, particularly the use of a generative pre-trained transformer (GPT) to produce poetry. In piloting the project, New York University's School of Professional Studies (NYUSPS) and the Master of Science in Management and Systems (MASY) program sought to determine the effect of changing the characteristics of the training sets on the nature of the text generated by a generative pre-trained transformer model, presenting the University with the opportunity to lead the conversation on ways industries may seek to leverage this technology. The project entails two major components: a research component of identifying a generative pre-trained transformer model, and a technical component of re-training the selected language model on a custom dataset. During the research, the team developed selection criteria to help assess the availability and functionality of several generative pre-trained transformer models. Within the technical component, the project set out to investigate using the literary work of two contemporaneous poets, Shakespeare and Donne, as training sets, along with seed poetry to evoke responses from the selected GPT language model. The project team utilized GPT-2 Simple, a Python package for fine-tuning OpenAI's GPT-2 transformer-based language model, to perform a set of experiments. GPT-2 Simple was re-trained with three custom datasets: Shakespeare's sonnets (unbiased), a combination of Shakespeare's and Donne's sonnets, and Shakespeare's sonnets (biased). In utilizing GPT-2 Simple, the project team had the opportunity to gain an in-depth understanding of the architecture and the Python code used to train the language model. The team performed 30 experiments, receiving a total of 150 text outputs. The gathered outputs will allow the project sponsor to further explore the impact of artificial intelligence on the generation of intellectual property within the present-day publishing and fine-arts industries. | en |
dc.language.iso | en_US | en |
dc.rights | Author Retains All Rights | en |
dc.subject | OpenAI, GPT-2, Transformer-Based Language Model, Generative Pre-Trained Transformer | en |
dc.title | Modern-Day Shakespeare: Training Set Experiments with a Generative Pre-Trained Transformer - Best Paper | en |
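The abstract above describes a workflow of re-training GPT-2 Simple on custom sonnet datasets and then prompting the model with seed poetry. A minimal Python sketch of that workflow is given below, assuming the open-source gpt-2-simple package. The model size, dataset filename, training steps, seed prompt, and sampling parameters are illustrative assumptions; the record does not publish the team's actual code or settings.

```python
import gpt_2_simple as gpt2

# Download the smallest (124M-parameter) GPT-2 model. The model size is
# an assumption; the record does not state which size the team used.
gpt2.download_gpt2(model_name="124M")

sess = gpt2.start_tf_sess()

# Re-train (fine-tune) on a custom dataset, e.g. a plain-text file of
# Shakespeare's sonnets. Filename and step count are illustrative.
gpt2.finetune(sess,
              dataset="shakespeare_sonnets.txt",
              model_name="124M",
              steps=1000)

# Seed poetry supplied as a prefix evokes a response from the
# fine-tuned model; the prompt and sampling values are assumptions.
outputs = gpt2.generate(sess,
                        prefix="Shall I compare thee to a summer's day?",
                        length=100,
                        temperature=0.7,
                        nsamples=5,
                        return_as_list=True)

for text in outputs:
    print(text)
```

Under these assumptions, generating five samples per experiment across 30 experiments would account for the 150 text outputs reported in the abstract.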
Appears in Collections: | MASY Student Research Showcase 2021 |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
R.Sheverack - Final Project Report - Modern-Day Shakespeare:Training Set Experiments with a Generative Pre-Trained Transformer.pdf | Final project report (abstract above) | 5.82 MB | Adobe PDF | View/Open |
Items in FDA are protected by copyright, with all rights reserved, unless otherwise indicated.