Apple’s latest research about running large language models on smartphones offers the clearest signal yet that the iPhone maker plans to catch up with its Silicon Valley rivals in generative artificial intelligence.
The paper, entitled “LLM in a Flash,” offers a “solution to a current computational bottleneck,” its researchers write.
Its approach "paves the way for effective inference of LLMs on devices with limited memory," they said. Inference refers to how large language models, the AI models that power apps like ChatGPT, respond to users' queries. Chatbots and LLMs normally run in vast data centers with much greater computing power than an iPhone.