Anthropic’s LLM, Claude, just got a substantial upgrade: its context window has been expanded from 9K tokens to an impressive 100K tokens, roughly 75,000 words. This expansion is significant: businesses can now submit hundreds of pages of material for Claude to analyze, and it enables longer, more in-depth dialogues.
While the average person might take more than five hours to read 100,000 tokens – and much longer to comprehend and analyze that information – Claude can perform this task in less than a minute. As a compelling proof-of-concept, Anthropic loaded the entirety of The Great Gatsby into Claude and asked it to identify a single altered line; Claude accomplished the task in just 22 seconds.
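The arithmetic behind these figures is easy to check. The sketch below uses two common rules of thumb, not Anthropic-published constants: about 0.75 English words per token, and an average adult reading speed of about 250 words per minute.

```python
# Back-of-the-envelope check on the 100K-token claims above.
# Both ratios are rough heuristics (assumptions), not exact figures.
WORDS_PER_TOKEN = 0.75  # assumption: typical for English text
READING_WPM = 250       # assumption: average adult reading speed

context_tokens = 100_000
approx_words = context_tokens * WORDS_PER_TOKEN   # ~75,000 words
reading_minutes = approx_words / READING_WPM      # ~300 minutes

print(f"~{approx_words:,.0f} words, ~{reading_minutes / 60:.0f} hours to read")
```

Running this prints "~75,000 words, ~5 hours to read", which matches the numbers above.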
Claude’s functionality isn’t limited to digesting lengthy documents. It can also act as an intelligent assistant, retrieving critical information from an array of documents or books. Because the entire text sits in its context at once, Claude can synthesize knowledge across many portions of a document, a significant advantage over traditional vector search approaches, which retrieve and score isolated chunks independently.
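To see why chunk-based retrieval can fall short, here is a deliberately toy sketch of vector search: the documents, the bag-of-words "embedding," and the query are all invented for illustration, standing in for a real embedding model and vector store. Each chunk is scored against the query independently, so only the single best-matching chunk comes back; facts spread across several chunks are never seen together, whereas a 100K-token window holds them all at once.

```python
import re
from collections import Counter
from math import sqrt

# Toy "embedding": word counts. A real system would use a learned
# embedding model, but the retrieval mechanics are the same.
def embed(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical document chunks -- related facts split across chunks.
chunks = [
    "The merger was announced in March.",
    "Regulators approved the deal in June.",
    "The company reported record revenue in Q2.",
]

query = "When was the merger announced?"
# Each chunk is scored in isolation; only the top hit is returned.
best = max(chunks, key=lambda c: cosine(embed(query), embed(c)))
print(best)  # -> "The merger was announced in March."
```

A question that depends on both the March announcement and the June approval would need the model to see both chunks, which a top-1 retrieval like this never provides.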
What can businesses do with 100K context windows? The answer: “more.” You can analyze larger and more complex documents such as financial statements, annual reports, legislation, documentation, entire codebases, etc.
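In practice, "more" still has a ceiling, so a pre-flight size check is useful before submitting a large document. The sketch below is a minimal illustration under stated assumptions: the 4-characters-per-token ratio is a rough heuristic for English prose (real code should count with the provider's tokenizer), and the reply budget is an invented parameter.

```python
# Pre-flight check: will a document fit in a 100K-token context window?
CONTEXT_WINDOW = 100_000
CHARS_PER_TOKEN = 4  # assumption: rough average for English prose

def estimated_tokens(text: str) -> int:
    """Heuristic token estimate; a real tokenizer would be exact."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(document: str, reserved_for_reply: int = 2_000) -> bool:
    """True if the document plus a reply budget fits the window."""
    return estimated_tokens(document) + reserved_for_reply <= CONTEXT_WINDOW

report = "annual report text " * 5_000  # ~95,000 characters of stand-in text
print(fits_in_context(report))  # -> True (~23,750 estimated tokens)
```

Documents that fail this check would still need the familiar workarounds, chunking or summarization, which is exactly what a larger window lets you avoid.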
This announcement about Claude’s ability to ingest and analyze large documents is news, but it won’t be for long. The arms race for token dominance has just begun.
To understand why, please consider enrolling in our free online course, Generative AI for Executives. It will help you get to the future first.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.