The limited ability of current Large Language Models (LLMs) to comprehend ever-larger loads of context remains one of the biggest impediments to achieving AI singularity - a threshold at which artificial intelligence demonstrably exceeds human intelligence. At first glance, the 200K-token context window of Anthropic's Claude 2.1 LLM appears impressive. However, its context recall proficiency leaves much to be desired, especially when compared with the relatively robust recall abilities of OpenAI's GPT-4. As Anthropic's own announcement puts it: "Our new model Claude 2.1 offers an industry-leading 200K token context window, a 2x decrease in hallucination rates, system prompts, tool use, […]"
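Context recall of the kind discussed above is commonly measured with a "needle in a haystack" test: a short fact is buried at varying depths inside a long filler document, and the model is asked to retrieve it. The article does not detail the methodology, so the following is only a minimal, self-contained sketch of such a harness; `ask_model` is a hypothetical stand-in (here a naive substring search) where a real LLM API call would go.

```python
def build_haystack(filler: str, needle: str, depth: float, n_chunks: int = 200) -> str:
    """Repeat filler text and insert the needle at a relative depth (0.0-1.0)."""
    chunks = [filler] * n_chunks
    chunks.insert(int(depth * n_chunks), needle)
    return "\n".join(chunks)

def ask_model(context: str, question: str) -> str:
    """Hypothetical stub for an LLM call; a real harness would query the model here."""
    # Naive stand-in: return the line containing the sought phrase, if present.
    for line in context.splitlines():
        if "magic number" in line:
            return line
    return "not found"

def recall_at_depths(needle: str, depths) -> dict:
    """Score recall (found / not found) at each insertion depth."""
    scores = {}
    for d in depths:
        context = build_haystack("The sky was a uniform grey all afternoon.", needle, d)
        answer = ask_model(context, "What is the magic number?")
        scores[d] = needle in answer
    return scores

if __name__ == "__main__":
    print(recall_at_depths("The magic number is 42.", [0.0, 0.25, 0.5, 0.75, 1.0]))
```

In a real evaluation the per-depth booleans would be aggregated into a recall heatmap across context lengths and depths, which is where the reported Claude 2.1 vs. GPT-4 differences show up.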
Read full article at https://wccftech.com/new-research-anthropic-claude-2-1-llm-remains-inferior-to-openai-gpt-4-at-context-recall/