
Make a Copy of Jasper AI with GPT3 and Steamship

Generative AI is a rapidly growing field of artificial intelligence that learns from data to produce novel outputs. It has been used to create exciting new works of art, literature, and music, and it is revolutionizing the way we create content. In this post, we're going to create a copy of Jasper AI and Copy AI, two generative AI article writers.

In this example, we not only create our own generative AI copywriter, we also put it up online. You can try out the demo for this project at Steamship and find the source code on GitHub. The only package you need to complete this project is Steamship, which can be installed via pip install steamship. Video tutorial here:

Create Your Own Jasper AI/Copy AI Clone Tutorial

We cover:

  • Prompt Engineering for Your Own Copy of Jasper AI/Copy AI
    • Create a Class to Interact with GPT-3
    • Prompt GPT to Generate an Outline
    • Generate Some Talking Points with GPT
    • Engineering a Prompt to Generate an Article with GPT-3
    • Processing Responses from GPT-3
    • Stringing it All Together for a Simple Version of Jasper AI/Copy AI
    • Premium Jasper AI/Copy AI Features via the Command Line
  • Put Your Copy of Jasper AI/Copy AI Online
  • Summary of Creating a Jasper AI/Copy AI Copy with GPT3 and Steamship
Simple Copy AI Clone Deployed to Steamship

Prompt Engineering for Your Own Copy of Jasper AI/Copy AI

With generative AI and large language models (LLMs) we are no longer bound by the need to gather data, train a network, and host it online. Instead, we can access LLMs created and hosted by companies like OpenAI or Google to do our inference for us. Prompt Engineering is the practice of creating the right prompts to get the desired output from Large Language Models like GPT-3.

To start off our code, we need to import the relevant packages shown in the code below. The typing package is helpful for type hints. It’s 2023, so it’s time to start writing typed Python.

from steamship import Steamship
from steamship.invocable import post, PackageService
import typing

Create a Class to Interact with GPT-3

The first thing we do is create a class to interact with GPT-3. The code below shows the example functions in the class as well as a helper function that we use to process the response. Before we fill in the generative AI functionality, we set up some configurations for interacting with GPT-3.

We use two configs in this example: one for generating points (section headers and talking points) and one for generating the article. The two keys we define for each config are the maximum number of words, max_words, and the creativity level, temperature. GPT-3 can handle up to roughly 4,000 tokens (word-sized pieces of text) per request, including the prompt. We use 150 for our points and 2500 for the whole article. Temperature is measured from 0 to 1, with higher values producing more creative output and lower values producing more predictable output.

The first function we create to copy Jasper AI/Copy AI is one to generate an outline. This function takes a title, a list of tags, and a tone. Next, we create a function to generate talking points for each of these headers. It takes a header and a tone. With the outline and talking points in hand, we create the article. The function to create the article takes a title, a dictionary of headers to talking points, and a tone to write in. 

Outside of the class, we create one more function to help us. I have chosen to keep it outside the class; in a production context, I would put it into a separate file. Our helper function takes a string response from GPT-3 and returns a list. We use it when requesting the outline and talking points, because the responses to those prompts come back from GPT-3 as a single string of numbered points.

from steamship import Steamship
from steamship.invocable import post, PackageService
import typing


class PromptPackage(PackageService):
  POINTS_LLM_CONFIG = {
    "max_words": 150,
    "temperature": 0.75
  }
  ARTICLE_LLM_CONFIG = {
    "max_words": 2500,
    "temperature": 0.5
  }

  # generate a full article from title, tone, and tags
  @post("generate")
  def generate(self, title: str, tone: str, tags: str) -> str:
    """Generates an article using title, tone, and a comma
    separated list of tags input as a string"""
    ...

  # generate section headers
  @post("generate_outline")
  def generate_outline(self, title: str, keywords: typing.List[str], tone: str) -> str:
    """Generates an outline based on title, keywords, and tone supplied by user"""
    ...

  # generate talking points for each section header
  @post("generate_talking_points")
  def generate_talking_points(self, header: str, tone: str) -> str:
    """Generates three talking points for the outline"""
    ...

  # generate article
  @post("generate_article")
  def generate_article(self, title: str, header_point_map: dict, tone: str) -> str:
    """Generates an article from the title, a map of headers to
    talking points, and a tone"""
    ...


def response_to_list(response: str) -> typing.List[str]:
  """Turn a response string into a list of strings; used for GPT-3 responses
  when asking for a list of items"""
  ...


# Try it out locally by running this file!
if __name__ == "__main__":
  ...

Prompt GPT to Generate an Outline

In this tutorial, we skip the top generate function and jump into its components. The first function we implement is the one to generate an outline. This function takes three parameters other than self. First, we need to supply the title of our article. Second, a list of keywords. Third, the tone that we want the article written in.

With these three variables, we are now equipped to prompt GPT-3 for an article outline. The prompt we use in this example is "Generate three headers for a technical article titled {title} focused on {', '.join(keywords)} with a {tone} tone".

You can play around with the prompt to change the outline output. For example, if you want more sections, you can change "three headers" to "four headers" or "five headers". If you want a different type of article, you can change "technical article" to "journalistic article" or simply "article".
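These tweaks amount to swapping words in the template. As a hypothetical sketch (the helper name and its defaults are mine, not part of the project), the variations can be parameterized like this:

```python
# Hypothetical helper for experimenting with the outline prompt.
# Only the template text changes; the call to GPT-3 stays the same.
def make_outline_prompt(title, keywords, tone,
                        n_headers="three", article_type="technical article"):
    return (f"Generate {n_headers} headers for a {article_type} titled {title} "
            f"focused on {', '.join(keywords)} with a {tone} tone")

# Ask for five sections of a journalistic article instead
prompt = make_outline_prompt("My Post", ["NLP", "GPT-3"], "casual",
                             n_headers="five", article_type="journalistic article")
print(prompt)
# → Generate five headers for a journalistic article titled My Post
#   focused on NLP, GPT-3 with a casual tone
```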

Once we have a prompt written, we just need to ping the LLM with our prompt. We initialize an LLM using the configuration for “points” that we wrote above and call it to get our response. I use clean_output=False in the call to ensure that we get the full response from GPT-3. The default of clean_output=True could lose some part of the response if it does not end in punctuation.

  # generate section headers
  @post("generate_outline")
  def generate_outline(self, title: str, keywords: typing.List[str], tone: str) -> str:
    """Generates an outline based on title, keywords, and tone supplied by user"""
    prompt = f"""Generate three headers for a technical article titled {title}
      focused on {", ".join(keywords)} with a {tone} tone"""

    points_llm = self.client.use_plugin("gpt-3", "points", config=self.POINTS_LLM_CONFIG)
    return points_llm.generate(prompt, clean_output=False)

Generate Some Talking Points with GPT

For each section header, we need some talking points. The next function we create prompts GPT-3 to create these talking points. This function only needs two parameters: the header and the tone we want the section written in. The prompt that we use is "Generate three short talking points about {header} with a {tone} tone".

Once again we can edit the prompt to get a different output. Then, we do the same thing we did to generate an outline. First, spin up a client to ping GPT-3. Second, generate a response.

  # generate talking points for each section header
  @post("generate_talking_points")
  def generate_talking_points(self, header: str, tone: str) -> str:
    """Generates three talking points for the outline"""
    prompt = f"Generate three short talking points about {header} with a {tone} tone"

    points_llm = self.client.use_plugin("gpt-3", "points", config=self.POINTS_LLM_CONFIG)
    return points_llm.generate(prompt, clean_output=False)

Engineering a Prompt to Generate an Article with GPT-3

At this point we have:

  • The title of the article
  • The tone of the article
  • Three headers
  • Three talking points for each header

The next step is to generate an article from all this information that we’ve already generated. This function takes a title, a map of headers to talking points, and a tone to prompt GPT-3 for an article. We cover how to turn the talking points into a map in the generate function and local CLI tool. 

The process to create a prompt for an article with all this information is more complex than the two processes we looked at above. First, we join all the headers together. Second, we create a paragraph that puts all the headers with their talking points together. With these sentences created, we create our prompt.

We prompt GPT-3 to create a technical article with the title and tone passed in. The headers we strung together along with the talking points related to them come next. Finally, we tell the LLM that the article should also include an introduction and conclusion paragraph and not repeat any sentences.

  # generate article
  @post("generate_article")
  def generate_article(self, title: str, header_point_map: dict, tone: str) -> str:
    """Generates an article from the title, a map of headers to
    talking points, and a tone"""
    sections = ", ".join(header_point_map.keys())

    sentences = []
    for header, points in header_point_map.items():
      sentences.append(f"The section {header} should be about {', '.join(points)}.")
    sentences = " ".join(sentences)

    prompt = f"""Generate a technical article with title {title} in a {tone} tone.
    The article should have three sections: {sections}. {sentences} The article should also
    include an introduction paragraph and a conclusion paragraph. Do not repeat any sentences."""

    article_llm = self.client.use_plugin("gpt-3", "article", config=self.ARTICLE_LLM_CONFIG)
    return article_llm.generate(prompt)

Processing Responses from GPT-3

Each of the responses from GPT-3 comes back as a string, even when we ask for a list of points. To edit and process each point separately, we split the string into a list of strings. This process starts with an empty list and an empty string.

We loop through each character in the string returned from GPT-3 and add that character to the initially empty string as long as it's not a digit or a period. When GPT-3 returns a list of points, it starts each point with "1.", "2.", or "3.", and we need to get rid of these prefixes. Whenever we hit a newline, we strip the accumulated string to remove any extra spaces and add it to the response list. If there is a string left over at the end, we append that cleaned string to the list as well.

def response_to_list(response: str) -> typing.List[str]:
  """Turn a response string into a list of strings; used for GPT-3 responses
  when asking for a list of items"""
  response_list = []
  item = ""
  for char in response:
    # skip the digits and periods that number each point
    if not char.isdigit() and char != '.':
      item += char
    # a newline ends the current point
    if char == "\n" and item.strip() != "":
      response_list.append(item.strip())
      item = ""
  if item != "":
    response_list.append(item.strip())
  return response_list
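To see the parsing in action, here is the helper (repeated so the snippet runs on its own) applied to a typical numbered response:

```python
import typing

def response_to_list(response: str) -> typing.List[str]:
  """Turn a response string into a list of strings (same helper as above)."""
  response_list = []
  item = ""
  for char in response:
    # skip the digits and periods that number each point
    if not char.isdigit() and char != '.':
      item += char
    # a newline ends the current point
    if char == "\n" and item.strip() != "":
      response_list.append(item.strip())
      item = ""
  if item != "":
    response_list.append(item.strip())
  return response_list

# A typical outline-style response from GPT-3
raw = "1. Know Your Audience\n2. Keep It Short\n3. End With a Call to Action"
print(response_to_list(raw))
# → ['Know Your Audience', 'Keep It Short', 'End With a Call to Action']
```

Note that this simple scheme drops every digit and period, not just the numbering, so a header like "GPT-3" would come back as "GPT-". Keep that in mind if your titles or headers contain numbers.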

Stringing it All Together for a Simple Version of Jasper AI/Copy AI

With all of these functions done, we can string them together to create an article from just the title, the tone of the article, and some tags. Tags could be taken in either as a single string or as a list of strings; we take them in as a single comma-separated string for this demo. The first thing we do is parse that string into a list.

Now we’re ready to use the functions we filled in earlier to generate this article. First, create the headers. We use the passed in title, parsed tags, and tone to get a list of points from GPT-3. Next, process that string into a list of strings to use it for the next step. With the list of headers in hand, we loop through each of them to generate talking points for each header and store that in a dictionary. Finally, we generate and return the article.

  # generate a full article
  @post("generate")
  def generate(self, title: str, tone: str, tags: str) -> str:
    """Generates an article using title, tone, and a comma
    separated list of tags input as a string"""
    # parse the comma separated tags into a list
    tags = [tag.strip() for tag in tags.split(",")]

    # generate and process the section headers
    headers = self.generate_outline(title, tags, tone)
    headers_list = response_to_list(headers.strip())

    # create a map of headers to talking points,
    # turning each header into a section via its points
    header_talking_points_map = {}
    for header in headers_list:
      talking_points = self.generate_talking_points(header, tone)
      header_talking_points_map[header] = response_to_list(talking_points)

    # generate and return the article
    article = self.generate_article(title, header_talking_points_map, tone)
    return article

Premium Jasper AI/Copy AI Features via the Command Line

What if we want to customize our article as the pieces are being generated? We can run the program locally. To run the program locally, we start up a temporary workspace from Steamship and run the package with that as the client. The process is similar to the generate function.

First, we prompt the user for the title, the tone, and a list of tags separated by commas. We process the tags by splitting the string into a list and then call the header creation function. This is where the functionality splits. Before we move on, we print out the headers and ask if the user would like to edit them. 

We let the user edit the headers as much as they like before going on to generate the talking points for each header. As the talking points are generated, we also ask the user if they would like to edit any of the generated talking points. After the user has been allowed to edit the talking points, we go on to use the edited (or not) headers and talking points to create the article. Finally, instead of returning the article, we save it to a text file.
Find the full code to this project here on GitHub.

# Try it out locally by running this file!
if __name__ == "__main__":
  with Steamship.temporary_workspace() as client:
    # initialize the package and collect user input
    gen_ai = PromptPackage(client)
    title = input("What do you want to title your article? ")
    tone = input("What is the tone of your article? ")
    tags = input("Enter up to 5 tags for your article (separated by commas) ")
    tags = [tag.strip() for tag in tags.split(",")]

    # generate and process the section headers
    headers = gen_ai.generate_outline(title, tags, tone)
    headers_list = response_to_list(headers.strip())

    # print headers and ask for edits
    print(headers.strip())
    edit = input("Would you like to edit the headers? (Y/n) ")
    while edit.lower() == "y":
      num_header = int(input("Which header would you like to edit? "))
      print(f"Current header: {headers_list[num_header-1]}")
      new_header = input("Please re-write the header as you desire\n")
      headers_list[num_header-1] = new_header
      edit = input("Would you like to continue editing the headers? (Y/n) ")

    print("Generating talking points ...")
    for header in headers_list:
      print(header)

    # create a map of headers to talking points,
    # turning each header into a section via its points
    header_talking_points_map = {}
    for header in headers_list:
      talking_points = gen_ai.generate_talking_points(header, tone)
      header_talking_points_map[header] = response_to_list(talking_points)
      print(f"\033[1;32mHeader: {header}\nPoints:\033[0;37m")
      for point in header_talking_points_map[header]:
        print(point)
      edit = input("Would you like to edit any points? (Y/n) ")
      while edit.lower() == "y":
        points = header_talking_points_map[header]
        # get the point to be edited
        num_point = int(input("Which point would you like to edit? "))
        print(points[num_point-1])
        new_point = input("Please re-write the point as you desire\n")
        points[num_point-1] = new_point
        edit = input("Would you like to continue editing the points? (Y/n) ")

    # inform the user that the article is being generated
    print("Generating article ...")
    for header, points in header_talking_points_map.items():
      print(f"\033[1;32mHeader: {header}\nPoints:\033[0;37m")
      for point in points:
        print(point)

    # generate the article
    article = gen_ai.generate_article(title, header_talking_points_map, tone)

    # save the article to a text file
    with open(f"{title}.txt", "w") as f:
      f.write(article)

Put Your Copy of Jasper AI/Copy AI Online

Deploy Your Copy AI Clone to the Cloud via Steamship

Steamship makes it surprisingly simple to put your copy of Jasper AI/Copy AI online. We can do it with one command: ship deploy. Once deployed, you get a link to your version of the app so you can try your own demo online. The demo for this project is live at Copy AI Clone!

Summary of Creating a Jasper AI/Copy AI Copy with GPT3 and Steamship

Copy AI costs $36 a month for the pro version. Jasper AI costs $59 for the basic version and $499 for businesses. You can now create your own version and use it for free! Steamship allows us to not only access GPT-3 easily, but also host our completed package online.

Our Copy AI/Jasper AI clone consisted of three generative functions. One to generate headers, one to generate talking points, and one to create an article. We used a fourth function to process the string responses and a fifth to string it all together to demo online. 
