Video Summary
This demo app shows:
- How to use LangChain's YoutubeLoader to retrieve the captions of a YouTube video
- How to ask Llama 3 to summarize the video content (subject to Llama 3's input size limit) in a naive way using LangChain's stuff method
- How to bypass Llama 3's 8k-token context length limit in a more sophisticated way, using LangChain's refine and map_reduce methods - see here for more info
We start by installing the necessary packages:
- youtube-transcript-api API to get transcript/subtitles of a YouTube video
- langchain provides necessary RAG tools for this demo
- tiktoken BytePair Encoding tokenizer
- pytube Utility for downloading YouTube videos
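The installs might look like this (package versions omitted; pin them if you need reproducibility):

```shell
pip install youtube-transcript-api langchain tiktoken pytube
```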
Let's first load a long (2:47:16) YouTube video (Lex Fridman with Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI) transcript using the YoutubeLoader.
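A minimal sketch of the loading step, assuming the langchain-community and youtube-transcript-api packages are installed; `<video_id>` is a placeholder for the episode's YouTube ID:

```python
from langchain_community.document_loaders import YoutubeLoader

# <video_id> is a placeholder - substitute the ID of the
# Lex Fridman / Yann LeCun episode
loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=<video_id>"
)
docs = loader.load()

# the whole transcript comes back as one Document
print(len(docs[0].page_content))
```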
You should see 142689 returned for the doc character length - about 30k words or 40k tokens, well beyond Llama 3's 8k-token context length limit. You'll see how to summarize text longer than the limit.
Note: We will be using Replicate to run the examples here. You will need to first sign in to Replicate with your GitHub account, then create a free API token here that you can use for a while. You can also use other Llama 3 cloud providers such as Groq, Together, or Anyscale - see Section 2 of the Getting to Know Llama notebook for more info.
If you'd like to run Llama 3 locally for the benefits of privacy, zero cost, and no rate limits (some Llama 3 hosting providers limit free-plan queries or tokens per second or minute), see Running Llama Locally.
Next you'll call the Llama 3 70b chat model from Replicate, because it's more capable than the Llama 3 8b chat model at summarizing long text. You can also try the Llama 3 8b model by replacing the model name with "meta/meta-llama-3-8b-instruct".
Once everything is set up, you can prompt Llama 3 to summarize the first 4000 characters of the transcript.
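Assuming `docs` holds the transcript loaded earlier, the model setup and first summarization call might look like this (the `model_kwargs` values and the prompt wording are illustrative choices, not the notebook's exact code):

```python
import os

from langchain_community.llms import Replicate

# placeholder - use your own token from replicate.com
os.environ["REPLICATE_API_TOKEN"] = "<your_replicate_api_token>"

llm = Replicate(
    model="meta/meta-llama-3-70b-instruct",
    model_kwargs={"temperature": 0.0, "top_p": 1, "max_new_tokens": 1000},
)

# naively summarize just the first 4000 characters of the transcript
text = docs[0].page_content[:4000]
summary = llm.invoke(f"Give me a summary of the text below:\n\n{text}")
print(summary)
```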
Note: The context length of 8k tokens in Llama 3 is roughly 6000-7000 words or 32k characters, so you should be able to use a number larger than 4000.
You can try a larger text to see how the summary differs.
If you try the whole content, which has over 142k characters (about 40k tokens, exceeding the 8k limit), you'll get an empty result (Replicate used to return the error "RuntimeError: Your input is too long.").
To fix this, you can use LangChain's load_summarize_chain method (details here).
First you'll create splits or sub-documents of the original content, then use LangChain's load_summarize_chain with the refine or map_reduce chain type.
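A sketch of the splitting and chain setup, assuming `llm` and `docs` are set up as above (the chunk_size of 3000 characters is an illustrative choice well under the context limit):

```python
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter

# split the ~142k-character transcript into sub-documents
text_splitter = RecursiveCharacterTextSplitter(chunk_size=3000, chunk_overlap=0)
split_docs = text_splitter.split_documents(docs)

# chain_type can be "refine" or "map_reduce"
chain = load_summarize_chain(llm, chain_type="refine")
print(chain.run(split_docs))
```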
Because this may involve many calls to Llama 3, it's helpful to set up a free LangChain API key here, run the following cell to set the necessary environment variables, and check the logs on LangSmith during and after the run.
The refine type implements the following steps under the hood:
- Call Llama 3 on the first sub-document to generate a concise summary;
- Loop over each subsequent sub-document, passing the previous summary along with the current sub-document to generate a refined new summary;
- Return the final summary generated on the final sub-document as the final answer - the summary of the whole content.
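The three steps above can be sketched in plain Python, with toy stand-ins for the two kinds of Llama 3 calls (the function names here are hypothetical, just to show the data flow):

```python
def refine_summarize(sub_docs, summarize, refine):
    """summarize(text) and refine(prev_summary, new_content) stand in
    for calls to Llama 3 with the corresponding prompts."""
    # step 1: concise summary of the first sub-document
    summary = summarize(sub_docs[0])
    # step 2: fold each subsequent sub-document into the running summary
    for doc in sub_docs[1:]:
        summary = refine(summary, doc)
    # step 3: the summary after the final sub-document is the answer
    return summary

# toy stand-ins for the LLM calls - truncation instead of summarization
toy_summarize = lambda text: text[:10]
toy_refine = lambda prev, new: (prev + " | " + new)[:30]

print(refine_summarize(["first chunk", "second chunk"],
                       toy_summarize, toy_refine))
```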
An example prompt template for each call in step 2, which gets used under the hood by LangChain, is:
Your job is to produce a final summary.
We have provided an existing summary up to a certain point:
<previous_summary>
Refine the existing summary (only if needed) with some more content below:
<new_content>
Note: The following call will make 33 calls to Llama 3 and generate the final summary in about 10 minutes. The complete log of the calls with inputs and outputs is here.
You can also set chain_type to map_reduce to generate the summary of the entire content using the standard map and reduce method. Behind the scenes, it first maps each split document to a sub-summary via a call to the LLM, then combines all those sub-summaries into a single final summary with one more LLM call.
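In the same toy-stub style as before, the map and reduce steps can be sketched as (the function names are hypothetical):

```python
def map_reduce_summarize(sub_docs, summarize):
    """summarize(text) stands in for a call to Llama 3."""
    # map: summarize each split document independently - these calls
    # don't depend on each other, so they can run in parallel
    sub_summaries = [summarize(d) for d in sub_docs]
    # reduce: combine the sub-summaries and summarize once more
    return summarize("\n".join(sub_summaries))

# toy stand-in for the LLM call - truncation instead of summarization
toy_summarize = lambda text: text[:12]
print(map_reduce_summarize(["first chunk of text", "second chunk of text"],
                           toy_summarize))
```

Unlike refine, the map calls are independent, which is one reason this variant finishes much faster on the same transcript.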
Note: The following call takes about 3 minutes and all the calls to Llama 3 with inputs and outputs can be traced here.
One final chain_type you can set is stuff, but it won't work with large documents, because it stuffs all the split documents into a single prompt, which exceeds Llama 3's context length limit.