Creating a Streaming Chat Application with Django & OpenAI: A Step-By-Step Guide

by Tobias Abdon

Building an AI chat app is one of the most fun projects you can take on right now. If you’ve dabbled in Django and JavaScript, building a chat application with streaming OpenAI responses can be a delightful challenge. In this blog post, we will walk through creating a chat application that streams responses from OpenAI interactively.

This solution will use Django, django-ninja, and vanilla JavaScript to create a streaming chat app. There are a couple of caveats to this:

  • Rendering the text: the streamed output currently renders as plain text; code blocks and other markdown are not formatted as they arrive. I do have a version of this that uses React and react-markdown to properly render the streaming output.
  • Undefined: the stream includes chunks without a content field (for example the first role chunk and the final empty delta); appending them blindly adds the word ‘undefined’ to the end of the response. The JavaScript below skips those chunks.
  • All on one template: I put all of the HTML and JS code on one template for simplicity’s sake.

Get Your OpenAI Key

We will use OpenAI’s GPT-4 Chat Completions API, which requires an OpenAI key. If you don’t have an account, go create one at openai.com. Then go to https://platform.openai.com/account/api-keys to get your API key.

I recommend creating a new API key for this project, and then deleting it when you are done.

Copy the key somewhere safe for later use.

Install and Configure a Django Project

Before we delve into creating the chat application, the first step is to set up your Django project. Find a directory to work in, and execute these commands. You’ll need to have Python 3.10 or higher installed.

# setup a virtual environment
$ python3.10 -m venv venv

# activate the virtual environment
$ source venv/bin/activate  # On Windows use `venv\Scripts\activate`

# install dependencies
# (pin openai below 1.0 -- the API code in this guide uses the pre-1.0 ChatCompletion interface)
$ pip install django django-ninja "openai<1.0"

# create the django project
$ django-admin startproject config

# move into the project directory (where manage.py lives)
$ cd config

# create an app to work with
$ django-admin startapp chat
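
At this point your directory layout should look roughly like this (chat/api.py and the templates folder don’t exist yet; we create them in later steps):

config/
├── manage.py
├── config/
│   ├── settings.py
│   ├── urls.py
│   └── ...
└── chat/
    ├── models.py
    ├── views.py
    └── ...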

Update the Settings File

Next, open up the config/settings.py file to add the OpenAI key and our apps.

In production you’ll want to load the OpenAI key from an environment variable (see the snippet after this settings block). But here we’re putting the key in the settings file directly for simplicity.

Add the os import and the OPENAI_API_KEY setting (BASE_DIR is already defined in the default settings file):

# settings.py

import os

# .. other imports

# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent

# Add the OpenAI key
OPENAI_API_KEY = 'YOUR_OPENAI_API_KEY'
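
If you’d rather not hard-code the key even locally, a minimal alternative (assuming you export OPENAI_API_KEY in the shell that runs Django) is:

# settings.py -- read the key from the environment instead of hard-coding it
OPENAI_API_KEY = os.environ.get('OPENAI_API_KEY', 'YOUR_OPENAI_API_KEY')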

Add the chat app to INSTALLED_APPS:

INSTALLED_APPS = [
    ...
    'chat',
]

Update the TEMPLATES setting (the change from default is the DIRS setting):

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [
            BASE_DIR / 'templates'
        ],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

Now that you have Django set up, let's move to the next step: creating our Django-Ninja API.

Building the Django-Ninja API

Django-Ninja is a web framework for building APIs which is easy to learn and integrates seamlessly with Django. So, once you have your Django project set up, the next step is to build your Django-Ninja API.

Create the api.py file in the chat directory. Here's the code block for our API:

# chat/api.py

import json
from django.http import StreamingHttpResponse
from ninja import Router
import openai

# To use the OpenAI API key from settings
from django.conf import settings
openai.api_key = settings.OPENAI_API_KEY

router = Router()

@router.get("/stream")
def create_stream(request):
    user_content = request.GET.get('content', '')  # Get the content from the query parameter

    def event_stream():
        for chunk in openai.ChatCompletion.create(
            model='gpt-4',
            messages=[{
                "role": "user",
                "content": f"{user_content}. The response should be returned in markdown formatting."
            }],
            stream=True,
        ):
            # Each chunk's "delta" holds the next piece of the assistant's message
            chatcompletion_delta = chunk["choices"][0].get("delta", {})
            data = json.dumps(dict(chatcompletion_delta))
            print(data)
            yield f'data: {data}\n\n'

    response = StreamingHttpResponse(event_stream(), content_type="text/event-stream")
    response['X-Accel-Buffering'] = 'no'  # Disable buffering in nginx
    response['Cache-Control'] = 'no-cache'  # Ensure clients don't cache the data
    return response
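
Each yield becomes one Server-Sent Events message, which is why the response uses the text/event-stream content type. To illustrate the shape of what the browser receives (the deltas below are made up, not real model output), the stream looks roughly like this:

data: {"role": "assistant"}

data: {"content": "Hello"}

data: {"content": " there!"}

The first and last chunks typically carry no content field, which is where the stray ‘undefined’ mentioned in the caveats comes from if you append every delta blindly.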

Creating Your Django Template

Now that we’ve developed our API, we will create a Django template that will serve as the user interface for our chat application.

First, in the chat directory, create the templates/chat subdirectories. Then, create the chat.html file in the chat/templates/chat directory.
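
On a Unix-like shell, one way to create the directory and file from the project root (the directory containing manage.py) is:

# create the template directory and an empty template file
$ mkdir -p chat/templates/chat
$ touch chat/templates/chat/chat.html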

Add this code to the chat/templates/chat/chat.html file.

Note: I used Tailwind to style these elements. If you want to see how to configure Django and Tailwind, check out django-tailwind.

<div class="flex h-full">

    <section class="relative w-full py-9">
        <!-- CSRF Token -->
        {% csrf_token %}

        <!-- Area for displaying API results -->
        <div id="api-results" class="bg-white pt-16 px-4 site-content prompt-content max-w-full"></div>
        <!-- Input and Button elements side by side -->
        <div class="flex items-center absolute bottom-0 p-4 w-full bg-gray-50">
            <input type="text" id="input-field" name="user_input" placeholder="Enter something..."/>

            <button id="submit-button" class="bg-blue-500 text-white p-2 rounded relative">
                Send
                <div id="spinner" class="hidden absolute top-0 left-0 right-0 bottom-0 bg-blue-500 flex items-center justify-center">
                    <div class="loader"></div>
                </div>
            </button>

            <!-- Button that re-renders the streamed text as markdown (used by the second script below) -->
            <button id="format-button" class="bg-gray-700 text-white p-2 rounded ml-2">Format</button>
        </div>
    </section>
</div>
<!-- marked.js is used by the format script below; pinned to v4 so the renderer.code(code, language) override keeps working -->
<script src="https://cdn.jsdelivr.net/npm/marked@4/marked.min.js"></script>
<script>
  document.addEventListener("DOMContentLoaded", function () {
    const submitButton = document.getElementById('submit-button');
    const inputField = document.getElementById('input-field');
    const spinner = document.getElementById('spinner');
    const apiResults = document.getElementById('api-results');

    submitButton.addEventListener('click', function (e) {
      e.preventDefault();
      spinner.classList.remove('hidden');  // Show spinner

      // Close any existing EventSource connections
      if (window.source) {
        window.source.close();
      }

      // Initialize the EventSource with the input data as a query parameter.
      // The router is mounted at /api/chat/ in urls.py, so the endpoint is /api/chat/stream.
      const content = encodeURIComponent(inputField.value);
      window.source = new EventSource(`/api/chat/stream?content=${content}`);

      window.source.onmessage = function (event) {
        // Handle the streamed data here
        const data = JSON.parse(event.data);
        if (data.content) {
          apiResults.innerHTML += data.content;  // Append the streamed text to the results div
        }
        spinner.classList.add('hidden');  // Hide spinner
      };

      window.source.onerror = function (event) {
        console.error("EventSource failed:", event);
        spinner.classList.add('hidden');  // Hide spinner
        window.source.close();  // Close the EventSource connection
      };
    });
  });
  document.addEventListener("DOMContentLoaded", function () {
    const formatButton = document.getElementById('format-button');
    const apiResults = document.getElementById('api-results');
    const renderer = new marked.Renderer();

    renderer.code = function (code, language) {
      return `<pre><code class="${language || ''}">${code}</code></pre>`;
    };

    marked.setOptions({renderer: renderer, breaks: true,});

    formatButton.addEventListener('click', function () {
      // Get the current content of #api-results
      const currentContent = apiResults.innerHTML;

      // Parse the content using marked.js
      const formattedContent = marked.parse(currentContent);

      // Update the #api-results with the formatted content
      apiResults.innerHTML = `<div class="code-container">${formattedContent}</div>`;
    });
  });

</script>
<style>
    #api-results pre {
        max-width: 100%;
        overflow-x: auto; /* Add horizontal scrollbar if necessary */
    }

    .code-container {
        max-width: 100%; /* Adjust the maximum width as needed */
        overflow-x: auto; /* Add horizontal scrollbar if necessary */
    }

    .code-container pre {
        white-space: pre-wrap; /* Preserve line breaks and wrap text */
    }
</style>

Update the URL File

There are two things we want to do now in config/urls.py. First, we will mount our chat API under the /api/ path. Then, we’ll render the template file at the /chat/ URL.

# config/urls.py

from django.contrib import admin
from django.urls import path
from django.views.generic.base import TemplateView

from ninja import NinjaAPI
from ninja.security import django_auth

from chat.api import router as chat_router

api = NinjaAPI(csrf=True)
# auth=django_auth protects the chat endpoints with Django's session auth,
# which is why we create a superuser and log in before testing
api.add_router("/chat/", chat_router, auth=django_auth)

urlpatterns = [
    path('admin/', admin.site.urls),
    path('api/', api.urls),
    path('chat/', TemplateView.as_view(template_name="chat/chat.html"))
]

Test it Out

Now we can test it out. This consists of running migrations, creating a superuser (the chat API is behind django_auth, so the browser needs a logged-in session), starting the development server, and visiting the /chat/ URL.

# make migrations
$ python manage.py makemigrations

# apply migrations
$ python manage.py migrate

# create a super user; complete the prompts
$ python manage.py createsuperuser
Username: 

# start the development server
$ python manage.py runserver

System check identified 3 issues (0 silenced).
October 26, 2023 - 17:31:50
Django version 4.2.5, using settings 'config.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

Then follow these steps:

  1. Open http://127.0.0.1:8000/admin/ and log in with the superuser details (this gives your browser the authenticated session that the chat API requires).
  2. Then open http://127.0.0.1:8000/chat/ and try out the chat experience.