Revolutionizing GPT-4: Enhancing Responsiveness with Streaming


In the realm of cutting-edge technology, GPT-4 represents a remarkable milestone in the field of natural language processing. However, like any advanced tool, it faces certain challenges, one of which is response time. Fortunately, we’ve harnessed the power of streaming to elegantly address this challenge, significantly enhancing GPT-4’s responsiveness. In this post, we will delve into how we’ve leveraged streaming to overcome slow response times and provide users with a smoother and more interactive experience.

The Challenge: GPT-4’s Response Time

While GPT-4 is highly advanced, it can sometimes take a noticeable amount of time to generate responses. This delay can pose issues for applications that rely on real-time interactions. Imagine waiting several seconds for a chatbot to respond — clearly not an ideal user experience. So, the question arises: How can we make GPT-4 feel faster without compromising its quality? The solution lies in the concept of streaming.

Introducing Streaming

Streaming can be likened to watching a video while it’s still downloading — you don’t need to wait for the entire content to load before you start enjoying it. Similarly, with GPT-4, instead of waiting for the entire response to be generated, we begin sending chunks of the response as soon as they become available. This approach allows users to see meaningful content on their screens while the remaining response is being processed in the background.

Why Streaming Matters

By implementing streaming, we’ve made substantial improvements to the user experience when interacting with GPT APIs. Users no longer have to endure the frustration of waiting for a complete response. They can engage with the content as it’s being generated. Whether it’s a chatbot, content recommendation system, or any other application powered by GPT-4, this approach ensures that interactions feel seamless, responsive, and engaging.

Setting Up GPT-4 and Streaming

Before we delve into the code, let’s cover the prerequisites:

1. Install Dependencies: Ensure that you have all the necessary dependencies installed. You can utilize libraries like OpenAI’s npm package (openai) for integrating GPT-4.

2. GPT-4 API Key: Obtain an API key from OpenAI to authenticate your requests.

Here, we are using a pre-deployed model. To learn more about this, refer to this article: Exploring LLM Platforms and Models: Unpacking OpenAI, Azure, and Hugging Face.

With these prerequisites in place, you’re ready to proceed.
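
Throughout the snippets below, openai refers to a client created with the v3 openai npm package (the version that exposes createChatCompletion). A minimal setup sketch, assuming the API key is stored in an OPENAI_API_KEY environment variable (adapt the variable name to your configuration), might look like this:

// Minimal client setup with the v3 openai package;
// the environment variable name is an assumption
import { Configuration, OpenAIApi } from 'openai';

const configuration = new Configuration({
    apiKey: process.env.OPENAI_API_KEY, // Your OpenAI API key
});
const openai = new OpenAIApi(configuration);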

Below is a code example illustrating how the streaming approach is implemented to handle real-time responses from GPT-4. This mechanism enables the generation of dynamic content, making applications like on-the-fly question generation and instant response evaluation possible without keeping the user waiting:

// Import necessary dependencies
import { Response } from 'express'; // Express response type used below

// `openai` is the OpenAIApi client created in the setup sketch above

// Define a function to handle AI response streaming
public getAIStreamResponse = async (messages, res: Response, payload) => {
    try {
        // Create a chat completion request with streaming enabled
        const completion: any = await openai.createChatCompletion({
            model: 'gpt-4', // Specify the GPT-4 model
            messages, // Pass in the conversation messages
            stream: true, // Enable streaming for real-time responses
            temperature: +process.env.INTERVIEW_PREP_OPEN_AI_MODEL_TEMPERATURE, // Set the model temperature
        }, {
            responseType: 'stream', // Specify the response type as 'stream'
        });

        const stream = completion.data; // Get the streaming data

        // Resolve and handle response chunks using the 'resolveResponseChunks' function
        const data = await this.resolveResponseChunks(stream, res, payload);

        return data;
    } catch (error) {
        // Handle any errors and log them
        this.logger.error(error);
        throw error;
    }
}

public resolveResponseChunks = (stream, res: Response, payload) => {
    let tableData = '';
    return new Promise<string>((resolve, reject) => {
        let completeResponse = '';
        res.setHeader('Content-Type', 'text/html; charset=UTF-8');
        res.setHeader('Transfer-Encoding', 'chunked');
        stream.on('data', (chunk) => {
            // Decoding and parsing data
            const decodedChunk = new TextDecoder().decode(chunk);
            const lines = decodedChunk.split('\n');
            const parsedLines = lines
                .map((line) => line.replace(/^data: /, '').trim())
                .filter((line) => line !== '' && line !== '[DONE]')
                .map((line) => JSON.parse(line));
            for (const parsedLine of parsedLines) {
                // Processing parsed data
                const { choices } = parsedLine;
                const { delta } = choices[0];
                const { content } = delta;
                if (content) {
                    // Displaying dynamic content
                    tableData += content;
                    completeResponse += content;
                    res.write(content);
                }
            }
        });
        stream.on('end', () => {
            // Completing the response
            res.end();
            resolve(completeResponse);
        });
        stream.on('error', (error) => {
            // Reject on stream failure so the caller's catch block can log it
            reject(error);
        });
    });
}

Initialization and Response Headers:

This function is defined as resolveResponseChunks, and it takes three arguments: stream, res, and payload. It returns a promise that resolves to a string. Inside the function, the variables tableData and completeResponse are initialized to empty strings. The res object represents the HTTP response that will be sent to the client. Response headers are set, specifying the content type and transfer encoding for the streaming response.

Streaming Data Event Handling:

The stream object, responsible for streaming data from GPT-4, uses the on method to register handlers for its ‘data’ and ‘end’ events. As chunks arrive, each one is decoded with TextDecoder, split into lines, and then processed.

‘data’ Event Handling:

When data chunks arrive (stream.on('data', (chunk) => { ... })), the function performs the following tasks:

  1. Decodes the received chunk using new TextDecoder().decode(chunk) to convert it into a readable string.
  2. Splits the decoded chunk into lines using split('\n'), where each non-empty line carries one server-sent event from the API.
  3. Processes each line by:
  • Removing the ‘data: ‘ prefix and trimming any leading or trailing whitespace using line.replace(/^data: /, '').trim().
  • Filtering out lines that are empty or contain ‘[DONE]’, as these lines typically indicate the end of the response.
  • Parsing each remaining line as a JSON object using JSON.parse(line), as illustrated in the sketch below.
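
To make the wire format concrete, here is a small sketch of what a decoded chunk might look like and what the pipeline above produces. The payload is illustrative; real chunks carry additional fields such as id and model:

// Illustrative decoded chunk in server-sent-event format
const decodedChunk =
    'data: {"choices":[{"delta":{"content":"Hello"}}]}\n' +
    '\n' +
    'data: {"choices":[{"delta":{"content":" world"}}]}\n' +
    '\n' +
    'data: [DONE]\n';

const parsedLines = decodedChunk
    .split('\n')
    .map((line) => line.replace(/^data: /, '').trim())
    .filter((line) => line !== '' && line !== '[DONE]')
    .map((line) => JSON.parse(line));

console.log(parsedLines[0].choices[0].delta.content); // 'Hello'
console.log(parsedLines[1].choices[0].delta.content); // ' world'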

Processing Parsed Data:

The parsed data is an array of objects, and within these objects, the relevant content generated by GPT-4 is extracted. This content typically includes textual responses.
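
As a rough sketch, the fields this code relies on can be described with a TypeScript interface like the one below. Streamed chunks contain additional fields, but only these are read here:

// Sketch of the chunk fields used above; real payloads include more
interface ChatCompletionChunk {
    choices: {
        delta: {
            content?: string; // Incremental text; absent on role or stop chunks
        };
    }[];
}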

Dynamic UI Update:

If there is content present, it is added to both the tableData and completeResponse strings. Additionally, the content is written to the response object using res.write(content). This step ensures that the content is sent to the client as soon as it becomes available, creating a real-time and dynamic user experience.
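
On the client side, the chunked response can be consumed incrementally too. Below is a minimal browser-side sketch; the /api/ai-stream endpoint and the output element are assumptions to adapt to your application:

// Hypothetical browser-side consumer of the chunked endpoint
async function consumeStream(prompt: string): Promise<void> {
    const response = await fetch('/api/ai-stream', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ prompt }),
    });
    const reader = response.body!.getReader();
    const decoder = new TextDecoder();
    while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        // Render each chunk as soon as it arrives
        document.getElementById('output')!.textContent += decoder.decode(value);
    }
}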

Streaming End Event Handling:

When the ‘end’ event is triggered by the stream, it signifies that all data chunks have been received. At this point, the response is finalized by ending the response object using res.end(). The completeResponse is then resolved as part of the promise, making the accumulated content available for further use.
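
To tie everything together, here is a hedged sketch of how the handler could be wired into an Express route. The route path, request shape, and aiController instance are illustrative, not part of the original code:

// Hypothetical Express wiring for the streaming handler
import express, { Request, Response } from 'express';

// aiController stands in for an instance of the service class shown earlier
declare const aiController: {
    getAIStreamResponse: (messages: unknown[], res: Response, payload: unknown) => Promise<string>;
};

const app = express();
app.use(express.json());

app.post('/api/ai-stream', async (req: Request, res: Response) => {
    const messages = [{ role: 'user', content: req.body.prompt }];
    try {
        await aiController.getAIStreamResponse(messages, res, req.body);
    } catch (error) {
        // If streaming already began, headers are sent; report only otherwise
        if (!res.headersSent) {
            res.status(500).json({ message: 'Failed to stream response' });
        }
    }
});

app.listen(3000);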

Example of a streaming response, integrated into one of our projects.

Conclusion

Using streaming technology has completely changed how GPT-4 responds to users, making interactions feel faster and more interactive. Instead of waiting for the entire answer, users now get a quick and continuous flow of information. This makes using GPT-4 much smoother and more engaging, as you don’t have to wait for everything to be finished before getting a response. It’s like having a conversation that flows naturally and quickly, thanks to streaming technology. This is a big leap forward in how we use and interact with GPT.

Author: Ankit Kumar Jha
