WWDC25 - Apple ML & AI Frameworks Overview

Jun 13, 2025

BLUF

This session introduces the machine learning (ML) and AI frameworks available on Apple platforms, highlighting new features, APIs, and tools for integrating, optimizing, and deploying ML models and Apple Intelligence in apps.

Platform Intelligence and Integration

  • ML and AI power features like Optic ID, handwriting recognition, and background noise removal across Apple devices.
  • Large foundation models drive system-wide generative features such as Writing Tools, Genmoji, and Image Playground.
  • Standard UI components gain Apple Intelligence features automatically; custom components can add them with minimal code.

ML-Powered APIs and New Features

  • ImagePlayground framework’s ImageCreator class enables programmatic image generation using text prompts and styles (iOS 18.4+).
  • Smart Reply API lets apps donate conversation context for generating message/email suggestions using on-device models.
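The ImageCreator flow above can be sketched roughly as follows. This is a hedged sketch based on the iOS 18.4+ ImagePlayground API; the prompt text is illustrative, and the exact style values available depend on the device, so treat the details as an assumption rather than a verified listing.

```swift
import ImagePlayground
import UIKit

// Sketch: generate an image programmatically from a text concept.
// Requires a device with Apple Intelligence enabled (iOS 18.4+).
func generateIllustration() async throws {
    // Creating the ImageCreator can throw if generation is unavailable.
    let creator = try await ImageCreator()

    // Prompt text and the chosen style are illustrative assumptions.
    let images = creator.images(
        for: [.text("a smiling robot watering a houseplant")],
        style: creator.availableStyles.first ?? .animation,
        limit: 1)

    // Results arrive as an async sequence of created images.
    for try await created in images {
        let uiImage = UIImage(cgImage: created.cgImage)
        _ = uiImage // hand off to your UI here
    }
}
```

Because generation runs on device, availability checks (the throwing initializer and `availableStyles`) matter more than with a server API.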

Foundation Models Framework

  • Provides optimized on-device language models for summarization, extraction, classification, and more in iOS 26.
  • Integration takes as little as three lines of Swift; user data stays on device and works offline, with no server costs or API keys.
  • Supports structured output via Guided Generation, directly filling app data structures.
  • Tool calling extends model knowledge with live/personal data and can trigger actions or cite sources.
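The "three lines" claim and Guided Generation can be sketched together. This is a minimal sketch of the Foundation Models framework as presented for iOS 26; the prompt strings and the `Itinerary` type are illustrative assumptions, not API requirements.

```swift
import FoundationModels

// Guided Generation: @Generable lets the on-device model fill your own
// Swift types directly instead of returning free-form text.
@Generable
struct Itinerary {
    @Guide(description: "A short, catchy trip title")
    var title: String
    var dayPlans: [String]
}

func demo() async throws {
    // Basic use: a session plus a single respond call.
    let session = LanguageModelSession()
    let summary = try await session.respond(
        to: "Summarize: on-device models keep user data private.")
    print(summary.content)

    // Structured output: request an Itinerary rather than a string.
    let plan = try await session.respond(
        to: "Plan a weekend in Kyoto",
        generating: Itinerary.self)
    print(plan.content.title)
}
```

With Guided Generation there is no JSON parsing step: `plan.content` is already a typed `Itinerary`, which is what "directly filling app data structures" refers to.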

Specialized ML Frameworks

  • Vision: Now features document recognition and lens smudge detection for improved image analysis.
  • Speech: New SpeechAnalyzer API and improved models support flexible, efficient speech-to-text, ideal for long-form audio.
  • Natural Language, Translation, and Sound Analysis: APIs for text, language, and audio tasks, customizable via Create ML.
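The SpeechAnalyzer flow can be sketched as a transcriber module attached to an analyzer, with results consumed as an async sequence. The option names and method signatures below are recalled from the session and may differ from the shipping API; treat this as an illustrative assumption, not a verified listing.

```swift
import Speech
import AVFoundation

// Hedged sketch of the new SpeechAnalyzer API (iOS 26).
func transcribe(file: AVAudioFile) async throws {
    // A transcriber module configured for incremental ("volatile") results.
    // Option names here are assumptions.
    let transcriber = SpeechTranscriber(
        locale: Locale(identifier: "en_US"),
        transcriptionOptions: [],
        reportingOptions: [.volatileResults],
        attributeOptions: [])

    let analyzer = SpeechAnalyzer(modules: [transcriber])

    // Consume transcription results while analysis runs.
    let consumer = Task {
        for try await result in transcriber.results {
            print(result.text) // AttributedString of the recognized speech
        }
    }

    // Feed the audio file through the analyzer.
    try await analyzer.analyzeSequence(from: file)
    try await consumer.value
}
```

The decoupling of input (the analyzer) from output (the module's results sequence) is what makes the API suit long-form and streaming audio.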

Bringing Custom Models to Device

  • Core ML framework deploys models in Core ML format with device-optimized performance and integrations in Xcode.
  • coremltools offers model conversion, optimization, and compression for improved speed and lower memory usage.
  • BNNS Graph and MPS Graph/Metal frameworks provide low-level ML and pre-/post-processing options.
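Core ML inference from Swift looks roughly like the sketch below. `MobileNetModel` stands in for the class Xcode generates from your .mlmodel/.mlpackage; that name and the feature names (`image`, `classLabel`) are hypothetical and depend on your model's interface.

```swift
import CoreML

// Sketch: on-device prediction with a bundled Core ML model.
func classify(pixelBuffer: CVPixelBuffer) throws -> String {
    let config = MLModelConfiguration()
    config.computeUnits = .all // let Core ML schedule CPU/GPU/Neural Engine

    // "MobileNetModel" is a placeholder for your Xcode-generated class.
    let model = try MobileNetModel(configuration: config)
    let output = try model.prediction(image: pixelBuffer)
    return output.classLabel
}
```

`computeUnits = .all` is the usual starting point; Xcode's performance reports then show which operations actually land on the Neural Engine.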

Exploring ML Research and Open Source Tools

  • MLX: Apple’s open-source array framework for efficient ML on Apple Silicon, leveraging unified memory.
  • Easily run, fine-tune, or train models like Mistral and DeepSeek-R1 via MLX and community resources.
  • MLX provides Python, Swift, and C++ APIs; PyTorch and JAX also run on Apple Silicon via their Metal backends.
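A minimal MLX Swift sketch of the unified-memory, lazy-evaluation model described above. The names (`MLXArray`, `eval`) follow the open-source MLX Swift package as I recall it; verify against the current package before relying on them.

```swift
import MLX

// MLX arrays live in unified memory, so no host/device copies are needed,
// and computation is lazy until explicitly evaluated.
let a = MLXArray([1.0, 2.0, 3.0])
let b = a * 2 + 1   // builds a lazy computation graph; nothing runs yet
eval(b)             // materializes the result
print(b)
```

Laziness lets MLX fuse and schedule work across the whole expression rather than kernel by kernel, which is part of why it performs well on Apple Silicon.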

Key Terms & Definitions

  • Apple Intelligence — System-wide ML-powered features integrated into Apple devices and apps.
  • Foundation Model — A large pre-trained model adaptable to a wide range of language and generative tasks.
  • Core ML — Apple’s framework for on-device model deployment and optimized inference.
  • ImagePlayground — Framework enabling generative image features within apps.
  • Tool Calling — Mechanism allowing LLMs access to app, system, or online tools for enhanced responses.
  • MLX — Open-source numerical/ML computing library for Apple Silicon with unified memory support.

Action Items / Next Steps

  • Explore the “Meet the Foundation Models framework,” “Reading documents using the Vision Framework,” and “Bring advanced speech-to-text to your app with SpeechAnalyzer” sessions.
  • Check “Bring your models to Apple Silicon” and “What’s new in BNNS Graph” for technical deep-dives.
  • Review documentation on integrating Smart Reply and other APIs.
  • Visit developer.apple.com and the Developer app to access resources, community forums, and further reading.