How to Use GPT for Generating Creative Content with Hugging Face Transformers

GPT, short for Generative Pre-trained Transformer, is a family of transformer-based language models. OpenAI's GPT-2 was one of the first transformer-based models capable of generating coherent text, and it can be used for a variety of applications, including helping to write content in a more creative way. The Hugging Face Transformers library provides pretrained models and simplifies working with these sophisticated language models.

Generating creative content can be valuable in data science and machine learning, for example to spruce up dull reports, create synthetic data, or help tell a more interesting story. This tutorial will guide you through using GPT-2 with the Hugging Face Transformers library to generate creative content. Note that we use the GPT-2 model here for its simplicity and manageable size, but swapping it out for another generative model will follow the same steps.

Setting Up the Environment

Before getting started, we need to set up our environment by installing the necessary libraries and importing the required packages.

Install the necessary libraries:

pip install transformers torch

Import the required packages:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

You can learn about Hugging Face Auto Classes and AutoModels in the Transformers documentation. Moving on.

Loading the Model and Tokenizer

Next, we will load the model and tokenizer in our script. The model in this case is GPT-2, while the tokenizer is responsible for converting text into a format that the model can understand.

model_name = "gpt2"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

Note that changing the model_name above can swap in different Hugging Face language models.

Preparing Input Text for Generation

In order to have our model generate text, we need to provide the model with an initial input, or prompt. This prompt will be tokenized by the tokenizer.

prompt = "Once upon a time in Detroit, "
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

Note that the return_tensors="pt" argument ensures that PyTorch tensors are returned.

Generating Creative Content

Once the input text has been tokenized and prepared for input into the model, we can then use the model to generate creative content.

gen_tokens = model.generate(input_ids, do_sample=True, max_length=100, pad_token_id=tokenizer.eos_token_id)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
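Because do_sample=True makes generation stochastic, each run produces different text. If you want reproducible output (for example, when comparing settings), you can seed PyTorch's global RNG with torch.manual_seed before calling generate. A minimal sketch of this seeding behavior, using a plain multinomial draw instead of the full model:

```python
import torch

# Seeding PyTorch's global RNG makes subsequent sampling deterministic.
# model.generate(..., do_sample=True) draws from the same RNG, so calling
# torch.manual_seed immediately before generate reproduces the same tokens.
probs = torch.tensor([0.1, 0.2, 0.3, 0.4])  # toy sampling distribution

torch.manual_seed(42)
first_draw = torch.multinomial(probs, num_samples=10, replacement=True)

torch.manual_seed(42)
second_draw = torch.multinomial(probs, num_samples=10, replacement=True)

print(torch.equal(first_draw, second_draw))  # True: same seed, same samples
```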

Customizing Generation with Advanced Settings

For added creativity, we can adjust the temperature and use top-k sampling and top-p (nucleus) sampling.

Adjusting the temperature:

gen_tokens = model.generate(input_ids,
                            do_sample=True,
                            max_length=100,
                            temperature=0.7,
                            pad_token_id=tokenizer.eos_token_id)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
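Temperature works by dividing the model's logits before the softmax: values below 1.0 sharpen the distribution toward the most likely tokens, while values above 1.0 flatten it and increase randomness. A minimal sketch of the arithmetic, using made-up logits rather than real model output:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by temperature, then apply a numerically stable softmax."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical logits for a three-token vocabulary

cool = softmax_with_temperature(logits, 0.7)    # sharper: top token gains probability
neutral = softmax_with_temperature(logits, 1.0) # unchanged softmax
hot = softmax_with_temperature(logits, 1.5)     # flatter: probability spreads out

print(cool[0] > neutral[0] > hot[0])  # True
```

Lower temperatures therefore make the generated text more predictable, and higher ones more surprising.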

Using top-k sampling and top-p sampling:

gen_tokens = model.generate(input_ids,
                            do_sample=True,
                            max_length=100,
                            top_k=50,
                            top_p=0.95,
                            pad_token_id=tokenizer.eos_token_id)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
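Here, top_k=50 restricts sampling to the 50 most probable tokens, and top_p=0.95 further truncates to the smallest set of tokens whose cumulative probability reaches 0.95. A minimal pure-Python sketch of the nucleus (top-p) filtering step, using a toy distribution rather than real model probabilities:

```python
def top_p_filter(probs, top_p):
    """Return the indices of the smallest set of tokens (taken in descending
    probability order) whose cumulative probability reaches top_p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept

# Toy distribution over a five-token vocabulary.
probs = [0.5, 0.25, 0.15, 0.07, 0.03]

print(top_p_filter(probs, 0.95))  # [0, 1, 2, 3]: the 3% tail token is cut off
print(top_p_filter(probs, 0.70))  # [0, 1]: only the two most likely tokens remain
```

Sampling then proceeds only over the kept tokens, which trims away the low-probability tail that often produces incoherent text.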

Practical Examples of Creative Content Generation

Here are some practical examples of using GPT-2 to generate creative content.

# Example: Generating story beginnings
story_prompt = "In a world where AI controls everything, "
input_ids = tokenizer(story_prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(input_ids,
                            do_sample=True,
                            max_length=150,
                            temperature=0.4,
                            top_k=50,
                            top_p=0.95,
                            pad_token_id=tokenizer.eos_token_id)
story_text = tokenizer.batch_decode(gen_tokens)[0]
print(story_text)

# Example: Creating poetry lines
poetry_prompt = "Glimmers of hope rise from the ashes of forgotten tales, "
input_ids = tokenizer(poetry_prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(input_ids,
                            do_sample=True,
                            max_length=50,
                            temperature=0.7,
                            pad_token_id=tokenizer.eos_token_id)
poetry_text = tokenizer.batch_decode(gen_tokens)[0]
print(poetry_text)


Experimenting with different parameters and settings can significantly affect the quality and creativity of the generated content. GPT models, particularly the more recent versions, have tremendous potential in creative fields, enabling data scientists to generate engaging narratives, synthetic data, and more. For further reading, consider exploring the Hugging Face documentation and other resources to deepen your understanding and expand your skills.

By following this guide, you should now be able to harness the power of GPT-2 and Hugging Face Transformers to generate creative content for various applications in data science and beyond.

For additional information on these topics, check out the following resources:

  • Hugging Face Transformers Documentation
  • PyTorch Documentation
  • Generative Pre-trained Transformer (Wikipedia)

Matthew Mayo (@mattmayo13) holds a Master's degree in computer science and a graduate diploma in data mining. As Managing Editor, Matthew aims to make complex data science concepts accessible. His professional interests include natural language processing, machine learning algorithms, and exploring emerging AI. He is driven by a mission to democratize knowledge in the data science community. Matthew has been coding since he was 6 years old.

More On This Topic

  • How to Fine-Tune BERT for Sentiment Analysis with Hugging Face Transformers
  • How to Use Hugging Face AutoTrain to Fine-tune LLMs
  • Surpassing Trillion Parameters and GPT-3 with Switch Transformers -…
  • Training BPE, WordPiece, and Unigram Tokenizers from Scratch using…
  • Top 10 Machine Learning Demos: Hugging Face Spaces Edition
  • A community developing a Hugging Face for customer data modeling