With most large language models being run on remote, cloud-based server farms, some users have been reluctant to share personally identifiable and/or private data with AI companies. In its WWDC keynote today, Apple stressed that the new "Apple Intelligence" system it's integrating into its products will use a new "Private Cloud Compute" to ensure any data processed on its cloud servers is protected in a transparent and verifiable way.
"You should not have to hand over all the details of your life to be warehoused and analyzed in someone's AI cloud," Apple Senior VP of Software Engineering Craig Federighi said.
Trust, but verify
Part of what Apple calls "a brand new standard for privacy and AI" is achieved through on-device processing. Federighi said "many" of Apple's generative AI models can run entirely on a device powered by an A17+ or M-series chip, eliminating the risk of sending your personal data to a remote server.