Apple releases second wave of Intelligence features via new developer betas

by Jason Snell and Dan Moren · Six Colors

Apple Intelligence just keeps on coming.

The first batch of features in Apple’s much-hyped entry into the artificial intelligence boom will be released to the general public sometime next week, but the company is already moving on to the next one.

On Wednesday, Apple rolled out developer betas of iOS 18.2, iPadOS 18.2, and macOS 15.2, which add Apple Intelligence features previously seen only in Apple’s own marketing materials and product announcements: three different kinds of image generation, ChatGPT support, Visual Intelligence, expanded English-language support, and Writing Tools prompts.

Three kinds of image generation

Apple’s suite of image-based generative AI tools, including Image Playground, Genmoji, and Image Wand, will be put in the hands of the public for the first time. When it introduced these features back at WWDC in June, Apple said they were intended to enable the creation of fun, playful images to share among family and friends, which is one reason the company has eschewed photorealistic image generation, instead opting for a couple of distinct styles it dubs “animation” and “illustration.”

Genmoji, Apple’s custom emoji generator, will offer several options based on a user’s prompt and will let the resulting images be sent not only as stickers but also inline or even as tapbacks. (One could, just as an example, ask for a “rainbow-colored apple” emoji.) It can also create emoji based on the faces in the People section of your Photos library. Genmoji creation isn’t yet supported on the Mac.

Image Playground is a straight-up image generator, but with some interesting guardrails. The feature will offer concepts to choose from to kick off the process, or you can simply type a description of the sort of image you want. Like Genmoji, Image Playground can use people from your Photos library to generate images based on them. It can also use individual images from Photos to create related imagery. The resulting images conform to specific, non-photographic styles, such as Pixar-style animation or hand-drawn illustration.

Image Wand lets users turn a rough sketch into a more detailed image: select the new Image Wand tool from the Apple Pencil tools palette and circle a sketch that needs an AI upgrade. Image Wand can also be used to generate pictures out of whole cloth, based on the surrounding text.

Of course, image generation tools open a potential can of worms when it comes to inappropriate content. Apple is attempting to combat that risk in a number of ways, including limiting what types of material the models are trained on and putting guardrails on what types of prompts will be accepted—for example, it will specifically filter out attempts to generate images involving nudity, violence, or copyrighted material. In cases where an unexpected or worrying result is generated—a risk with any model of this type—Apple provides a way to report that image directly within the tool itself.

Third-party developers will also get access to APIs for both Genmoji and Image Playground, allowing them to integrate support for those features into their own apps. That’s particularly important for Genmoji, as third-party messaging apps won’t otherwise be able to support the custom emoji that users have created.
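Apple hasn’t shared much sample code yet, but judging from the 18.2 beta SDK, the Image Playground side of the integration amounts to presenting a system sheet and getting back a file URL for whatever image the user generates. Here’s a minimal SwiftUI sketch; the imagePlaygroundSheet modifier and ImagePlaygroundConcept names are as they appear in the beta, so treat the exact signatures as provisional:

```swift
import SwiftUI
import ImagePlayground

struct StickerButton: View {
    @State private var showPlayground = false
    @State private var generatedImageURL: URL?

    var body: some View {
        Button("Make me an image") {
            showPlayground = true
        }
        // Present the system Image Playground sheet, seeded with a text concept.
        // (Parameter names are from the 18.2 beta SDK and could change.)
        .imagePlaygroundSheet(
            isPresented: $showPlayground,
            concepts: [.text("rainbow-colored apple")]
        ) { url in
            // The sheet returns a file URL for the image the user accepted.
            generatedImageURL = url
        }
    }
}
```

On the Genmoji side, as we understand it, custom emoji travel inside attributed strings as adaptive image glyphs (NSAdaptiveImageGlyph), and a text view has to explicitly allow those glyphs in its content to render them. That’s why messaging apps with their own text handling need to adopt the API before users’ Genmoji will show up.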

Give Writing Tools commands

The update also adds more of the free-form text-prompt flair frequently associated with large language models. Writing Tools—which in the first-wave feature release mostly let you tap on different buttons to make changes to your text—now has a custom text input field. When you select some text and bring up Writing Tools, you can tap to type a description of what you want Apple Intelligence to do to your text. For example, I could have selected this paragraph and then typed “make this funnier.”

Along with the developer beta, Apple’s also rolling out a Writing Tools API. That’s important because while Writing Tools are available throughout apps that use Apple’s standard text controls, a bunch of apps—including some of the ones I use all the time!—use their own custom text-editing controls. Those apps will be able to adopt the Writing Tools API and gain access to all the Writing Tools features.
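For apps that stick to the system text views, Writing Tools largely just appears; the existing API surface there is mostly about tuning how it behaves. The new piece in this beta is aimed at fully custom text engines, which coordinate with Writing Tools through a separate, more involved object that we won’t sketch here. For the simpler case, here’s roughly what the UIKit-level knobs look like; the property names are as they appear in the iOS 18 SDK, so consider the specifics provisional:

```swift
import UIKit

final class NoteViewController: UIViewController {
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        view.addSubview(textView)

        // Ask for the full inline Writing Tools experience (proofreading and
        // rewrites applied in place) rather than the reduced overlay panel.
        textView.writingToolsBehavior = .complete

        // Constrain what Writing Tools is allowed to hand back: plain text
        // and lists, but no tables or other rich-text formatting.
        textView.allowedWritingToolsResultOptions = [.plainText, .list]
    }
}
```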

Here’s ChatGPT, if you want it

This new wave of features also includes connectivity with ChatGPT for the first time. Siri queries can be passed to ChatGPT dynamically based on the type of query: asking Siri to plan a day of activities for you in another city, for example. Users are prompted to enable the ChatGPT integration when they install the beta, and asked again each time a query is about to be handed off. The integration can also be disabled within Settings, or you can opt to turn off the per-query prompt. In certain cases you might get additional prompts to share specific kinds of personal data with ChatGPT—for example, if your query would also upload a photograph.

Apple says that by default, requests sent to ChatGPT are not stored by the service or used for model training, and that your IP address is hidden so that different queries can’t be linked together. A ChatGPT account isn’t required to use the feature, but you can opt to log into one, which provides more consistent access to specific models and features. Otherwise, ChatGPT itself determines which model to use to respond to the query.

If you’ve ever tried ChatGPT for free, you’ll know that the service limits which models you can use and how many queries you’re allowed in a given period. Notably, ChatGPT use through Apple Intelligence isn’t unlimited either—use it enough and you’ll probably run into those caps. It’s unclear whether Apple’s deal with OpenAI means the limits are more generous for iOS users than for randos on the ChatGPT website, though. (If you do pay for ChatGPT, you’ll be held to the limits of your ChatGPT account.)

Visual Intelligence on iPhone 16 models

For owners of iPhone 16 and iPhone 16 Pro models, this beta also includes the Visual Intelligence feature first shown off at the debut of those devices last month. (To activate it, press and hold the Camera Control button to launch Visual Intelligence, then aim the camera and press the button again.) Visual Intelligence looks up information about whatever the camera is seeing, such as the hours of a restaurant you’re standing in front of or event details from a poster; it can also translate text, scan QR codes, read text out loud, and more. And it can optionally use ChatGPT and Google search to find more information about what it’s looking at.

Support for more English dialects

Apple Intelligence debuted with support only for U.S. English, but in the new developer betas that support has become very slightly more worldly. It’s still English-only for now, but English speakers in Canada, the United Kingdom, Australia, New Zealand, and South Africa will be able to use Apple Intelligence in their own versions of English. (Support for English locales in India and Singapore is forthcoming, and Apple says that support for several other languages—Chinese, French, German, Italian, Japanese, Korean, Portuguese, Spanish, and Vietnamese among them—is coming in 2025.)

What’s next?

As part of these developer betas, Apple is collecting feedback on the performance of its Apple Intelligence features. The company plans to use that feedback not only to improve its tools but also to gauge when they might be ready to roll out to a larger audience. We definitely get the sense that Apple is treading as carefully as it can here while also rushing headlong into its artificial-intelligence future. It knows there are going to be quirks with AI-based tools, and that makes these beta cycles even more important in shaping the final product.

Obviously there will be many more developer betas, and ultimately public betas, before these .2 releases go out to the general public later this year. And there are still announced Apple Intelligence features yet to come, most notably a bunch of vital new Siri features, including support for Personal Context and in-app actions using App Intents. Today marks the next step in Apple Intelligence, but there’s still a lot of road left for Apple to walk.—Jason Snell and Dan Moren
