WWDC: Apple Intelligence Brings Generative AI to Mail, Messaging and More

Apple is bringing generative AI to Siri and throughout iOS 18, iPadOS 18 and macOS Sequoia, the company announced at its Worldwide Developers Conference on June 10.

As was rumored, Apple partnered with OpenAI to link ChatGPT to iOS, iPadOS and macOS. However, Apple Intelligence itself will be an independent generative AI service hosted on Apple's own servers.

Adding generative capabilities to Siri has the advantage of familiarity, and partnering with OpenAI means Apple won’t have to do all of the work itself. Apple is betting on generative AI being a seamless addition to the way the Apple ecosystem may already organize a consumer’s or professional’s life.

Apple Intelligence will be available in beta this fall in iOS 18, iPadOS 18 and macOS Sequoia. It will work on the iPhone 15 Pro and iPhone 15 Pro Max, as well as iPad and Mac devices with M1 or newer chips. ChatGPT assistance for Apple Intelligence will follow later in 2024.

Siri will respond to natural language

The popular voice assistant is catching up to the generative AI world with Apple Intelligence, which enables Siri to:

  • Understand more natural-sounding questions and speech patterns, including pauses and mid-sentence corrections.
  • Remember previous conversations and references.
  • Answer questions about Apple devices.
  • Take actions inside apps on your behalf.

You can now type to Siri by double-tapping the bottom of the screen to make natural-language requests.

Apple didn’t show flashy demos like OpenAI and Google did, but it may be betting on more consumers finding uses for generative AI through what they already do with Siri.

Enhancements to Siri with Apple Intelligence will roll out over the course of the next year. Developers will be able to define how Siri with Apple Intelligence interacts with their apps.

What is Apple Intelligence?

During the AI reveal at WWDC, Apple put the emphasis on “personal intelligence” and privacy. Most of the generative AI processes used for Apple Intelligence will be performed on-device on A17 Pro or M-series chips and in a proprietary cloud.

Craig Federighi, Apple senior vice president of software engineering, emphasized personal context in the WWDC keynote.

“We’re tremendously excited about the power of generative models … But these tools know very little about you and your needs,” he said.

In response, Apple Intelligence on iOS 18, iPadOS 18 and macOS Sequoia will use AI for “personal context” to make day-to-day tasks faster. The iPhone can prioritize and summarize notifications with the “priority notifications” feature, surfacing the most important ones first.

What else does understanding personal context mean? Apple Intelligence can determine what data you think is most relevant and reference related content. For example, if your meeting is rescheduled, you can check whether the new time will prevent you from getting to your daughter’s school event. Answering this question requires Apple Intelligence to search and reason across your personal and work calendars, email and Apple Maps.

New writing tools with generative AI will be available in Mail, Notes, Pages, Keynote and third-party apps. Apple Intelligence can write and summarize emails, too.

Apple Intelligence can take action across apps to carry out tasks on your behalf. For example, you can make natural-language requests like “Play the podcast that my wife sent the other day.”

Apple Intelligence lets consumers create custom emojis easily in apps like Messages. Notes, Pages, Keynote and Freeform will all gain generative AI images. Apple Intelligence can also identify and recreate pictures of people in your photo library, such as creating an image of someone for their birthday.

Private Cloud Compute

Sometimes, Apple’s AI processes will need to be performed on external servers. To head off the privacy and security problems this raises, Apple will run these processes on Private Cloud Compute, dedicated servers for Apple Intelligence that remain entirely within the Apple ecosystem and run software built in Apple’s own Swift programming language. Even Apple can’t see that data, the company claims.

“You should not have to hand over all the details of your life to be warehoused and analyzed in someone’s AI cloud,” said Federighi.

ChatGPT will be integrated with Siri

In addition to Apple Intelligence, Apple is partnering with OpenAI to bring ChatGPT and GPT-4o to Siri, which will call on them when referencing current events or gathering information that isn’t on your devices.

Siri will ask for permission before reaching out to ChatGPT. Access is free, or you can connect your personal ChatGPT subscription. This integration is coming later this year, and Apple plans to add support for other AI models.

Where Apple’s AI fits into the competition

Apple has seemingly chosen to stay out of the generative AI chatbot race for the last few years, although it published research on running large language models on mobile devices in December 2023. The relatively late entry means Apple has had time to watch the rest of the industry try to find practical use cases for generative AI. It’s also seen Microsoft switch its Recall AI feature from default to opt-in after security concerns.

Will Apple face the same backlash and security scrutiny as Microsoft’s Recall? Is Apple’s ecosystem so powerful that email summarization and the other Apple Intelligence capabilities will prove as game-changing as AI companies predict generative AI to be? Is Apple playing it safe when it comes to AI? It’s too early to say whether Apple Intelligence will be merely opt-in, or transformative.

Apple coders, rejoice! Your programming tools just got a big, free AI boost

Craig Federighi, Apple Senior Vice President of Software Engineering

Apple today announced AI additions to its Xcode development environment, aiming to increase the productivity of programmers building apps across Apple's product line.

For those of you who aren't programmers, let's take a moment to discuss just what a development environment does. To do this, a good analogy is a chef's kitchen.

A baker's kitchen, for example, will be different from one focused on low-carb cooking. A baker's kitchen may revolve around a stand mixer, and have an assortment of racks for cooling and preparation. There would be ample storage for flour, sugar, and other baking staples.

A kitchen focused on low-carb cooking would have equipment like spiralizers, chopping gadgets, sous-vide machines, and an air fryer or two. The storage focus would be the challenge of finding fridge and counter storage for fresh fruits and vegetables, as well as lean proteins.

Each of these work environments is tailored to the individual needs and working style of the person doing the work, customized with certain commonly used tools, and even optimized for reducing steps.

A programmer's development environment, whether it be Xcode for Apple development, Visual Studio for Microsoft applications, or PhpStorm (my primary coding environment) for building web applications, is also an environment that can be tailored to its user's needs.

We coders work on screen and define our "floor space" by the arrangement of windows and panes on that screen. We also have "major appliances," except instead of a stove and refrigerator, we have an editor and a debugger. Many of us carefully arrange our windows and panes to save steps, and often save different tool layouts depending on what stage of coding we're in at the time.

Let's belabor our kitchen analogy a bit more. How many of us, growing up, helped out mom and dad by preparing the food, perhaps chopping up the vegetables or cleaning up or doing the dishes? When we were helping out, we were not the "chef," but instead very valued helpers (even if we did sneak a morsel here or there when a parent seemed like they weren't watching).

In the case of a coding environment, the AI additions are like these kitchen helpers. AI is nowhere near ready to go out and build a major application. But it can undertake numerous small and often tedious tasks that are part of the coding process. In the past year, I've used AI numerous times to help out my coding and I'm convinced I saved a month or more delegating the creation and analysis of small subroutines to the AI.

Apple's version of AI is called Apple Intelligence. At the very end of the keynote, Craig Federighi announced a number of key Apple Intelligence-powered features for Xcode, Apple's development environment.

First, he discussed how Apple Intelligence has been built into developer SDKs. An SDK is a software development kit, effectively a way for developers to incorporate pre-existing OS technology into their apps.

Continuing our kitchen analogy, think of SDKs as roughly analogous to meal kits. Federighi talked about incorporating Image Playground (Apple's text-to-image AI feature) into developer apps with just a few lines of code.

That's kind of like how a cook might make a meal by opening up the meal kit and just incorporating all the ingredients to come up with a delicious dinner. In the case of the meal kit, the kit developers did all the work in figuring out the ingredients, selecting and providing them, and creating the recipes and instructions. In the case of an SDK, the SDK developer did all the work in figuring out the technology (like text to image), and providing that to the app developers.

Any app that uses the standard editable text view for creating text gains full access to Apple Intelligence writing tools (summaries, etc.).

Siri has also been upgraded with Apple Intelligence. Developers who use SiriKit (the SDK for Siri) will gain Siri-based enhancements for features like lists, notes, media, messaging, payments, restaurant reservations, VoIP calling, and workouts.

Likewise, Apple is adding to its App Intents functionality. These are predefined actions or tasks that apps can perform, allowing them to integrate seamlessly with Siri and other system features to enhance user interactions and automation. Federighi stated that Apple is enhancing intents with Apple Intelligence capabilities in the following categories: books, browsers, cameras, document readers, file management, journals, mail, photos, presentations, spreadsheets, whiteboards, and word processors.

These allow developers to easily add new AI functionality without much additional work, and certainly without the full investment in AI that was necessary to create the features to begin with.

In terms of the coding process itself, Apple announced that it's adding generative intelligence to Xcode. Specifically, it will provide on-device code completion (writing small chunks of code for a developer) for the Swift language. It's interesting that he used the term "code completion" instead of writing code, because code completion implies a much more controlled process, simply extending and clarifying code production. Full code writing would involve telling the AI to write a module to a given spec, and from this announcement, it's not clear Xcode will do that.
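The distinction can be illustrated with a toy sketch. This is in Python purely for illustration; Xcode's actual on-device model is a trained language model, and the tiny "corpus" and function names below are invented stand-ins. The point is that a completer only extends code the developer has already started, rather than generating a whole module from a natural-language spec:

```python
# Toy illustration of code completion vs. full code generation.
# The completer below extends a prefix it recognizes; it never
# invents an entire module from a specification.

from typing import Optional

# A tiny "learned" corpus of Swift-like snippets the completer has seen.
CORPUS = [
    "func viewDidLoad() { super.viewDidLoad() }",
    "let session = URLSession.shared",
]

def complete(prefix: str) -> Optional[str]:
    """Return the remainder of the first corpus entry starting with `prefix`."""
    for snippet in CORPUS:
        if snippet.startswith(prefix) and len(snippet) > len(prefix):
            return snippet[len(prefix):]
    return None  # nothing to extend -- a completer does not guess from scratch

print(complete("func viewDidLoad"))  # prints "() { super.viewDidLoad() }"
```

A real completion model generalizes far beyond verbatim matches, of course, but the shape of the task is the same: extend what is there, under the developer's control.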

Xcode will also be able to answer questions for Swift developers, which can save a ton of time. Developers can ask how to code specific SDK calls (for example, "how can I add Image Playground here?"). Presumably (again, not specified in the keynote) developers could also ask the AI "what does this code do?" and get a detailed explanation.

Generative AI in development environments is a fairly new thing, and environment producers as well as individual developers are still learning where generative AI can be a helpful new power tool or where it becomes something that just gets in the way.

We came a long way in the last year, and my bet is that by WWDC 2025, this feature set will seem rudimentary because we've all learned a lot more about how AI can help coding.

What is Apple Intelligence? How the iPhone’s on-device and cloud-based AI works

Apple is expected to have one of its most groundbreaking Worldwide Developers Conference (WWDC) keynotes today, as the company plans to add major artificial intelligence (AI) features to its operating systems.

But rather than unveiling a slew of flashy generative AI features to knock your socks off, Apple is expected to focus on incorporating AI into its apps to simplify users' daily tasks. And it'll categorize such features under the name "Apple Intelligence."

According to Bloomberg, Apple is branding its AI features under Apple Intelligence — and we didn't miss the snarky wordplay. Apple Intelligence will include the latest AI features coming to its operating systems, including iOS, iPadOS, MacOS, and WatchOS.

Apple Intelligence focuses on broad-appeal AI features rather than advanced image and video generation. To do this, the company developed in-house AI models and partnered with OpenAI to power a chatbot that will work similarly to ChatGPT.

Some of the biggest AI features we're expecting with Apple Intelligence include:

  • Improved photo editing, like object removal, with AI in Photos.
  • Greater Siri control over apps and actions, including asking Siri to delete emails or edit photos.
  • AI generation of custom emojis based on text prompts.
  • Generating quick recaps of notes, text message threads, emails, and more.
  • Automatically suggesting responses for emails and messages.
  • An improved Mail app that can categorize emails and generate messages.
  • Automatic transcription of voice memos.
  • AI enhancements for Xcode to auto-complete code.

Aside from these AI features, Bloomberg reports that iOS 18 will include new customizable icons and interface updates for Control Center, Settings, and Messages. Apple is also expected to launch a new Passwords app to replace the iCloud Keychain and give users a more user-friendly option, similar to 1Password and LastPass.

On-device vs. cloud-based

Though Apple was rumored to be working on ways to keep its AI running strictly on-device for security and privacy, Apple Intelligence is expected to rely on the cloud for at least some tasks. The split will depend on task complexity, resource availability, data privacy considerations, and latency requirements.

Essentially, if a task is simple enough to be processed locally with the device's own processing power, and requires immediate results, it is more likely to be handled on-device. Tasks involving sensitive data could also favor on-device processing, given Apple's emphasis on data privacy.

Cloud-based AI processing, in turn, involves sending data from the device to remote servers that can handle complex or computationally heavy tasks. In Apple's case, tasks that require processing large amounts of data or up-to-date models could include advanced natural language processing (NLP), intricate analysis, and complex image and video generation.

An algorithm will determine, based on a task's complexity and system requirements, whether it should be processed on-device or offloaded to the cloud. Simpler tasks, like a basic Siri request or other lightweight NLP, can be processed on-device. More complex tasks, like generating a detailed summary of a large document, will be sent to the cloud, where more robust processing can occur.
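Apple has not published this routing algorithm, but the decision logic described above can be sketched as a simple heuristic. Everything below, including the field names, capacity score, and thresholds, is an illustrative assumption rather than Apple's implementation:

```python
# Hypothetical sketch of an on-device vs. cloud routing heuristic,
# using the factors named in the article: task complexity, device
# resources, data privacy, and latency. Not Apple's actual algorithm.

from dataclasses import dataclass

@dataclass
class AITask:
    complexity: int         # rough compute cost: 1 (trivial) to 10 (heavy)
    data_sensitive: bool    # does the task touch personal data?
    needs_low_latency: bool # does the user expect an immediate result?

def route(task: AITask, on_device_capacity: int = 5) -> str:
    """Return "on-device" or "cloud" for a given task."""
    budget = on_device_capacity
    # Privacy- and latency-sensitive work is biased toward staying local.
    if task.data_sensitive or task.needs_low_latency:
        budget += 2
    return "on-device" if task.complexity <= budget else "cloud"

# A basic Siri request stays local; summarizing a huge document is offloaded.
print(route(AITask(complexity=2, data_sensitive=True, needs_low_latency=True)))    # prints "on-device"
print(route(AITask(complexity=9, data_sensitive=False, needs_low_latency=False)))  # prints "cloud"
```

The real system presumably weighs many more signals (battery state, model availability, thermal headroom), but the shape of the decision is the same: cheap, private, urgent work stays local, and heavy work goes to the server.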

Technical requirements for Apple Intelligence

According to Bloomberg, Apple's new AI features will be compatible with the latest Apple devices, including iPhone 15 Pro or newer models, which run on an A17 Pro chip, and iPads and Macs with an M1 chip or newer. While these AI features may help drive sales of new iPhones and Macs, as a current iPhone 14 Pro Max owner, I hope that at least some will trickle down to older iPhone models. We'll know what the official compatibility list is come WWDC.

During the event, Apple is expected to highlight new security measures for running AI tasks, including chip-based security in data centers for cloud-based processing. It will also reiterate its commitment not to build user profiles based on consumer data.

Perhaps most importantly, users can opt in to Apple Intelligence features, which will be introduced as betas while Apple works to improve its AI capabilities over time.

Internet: Lessons for AI safety and alignment from pharmaceutical regulations

The pharmaceutical industry is among the most heavily regulated industries in the United States with respect to the safety and efficacy of therapies. Yet there are also approaches that work at the source, apart from regulatory efforts.

Narcan (naloxone), a medication that reverses opioid overdoses, is an example of using a drug to counter the effects of another drug. There are several approved medications that counter the side effects of other approved medications.

Simply put, the biological targets that make medications therapeutically useful can also make them work in unwanted ways. This made it necessary to have not only external compliance efforts but also a kind of internally mechanized regulation, medication on medication, for the health and safety of individuals.

AI safety, alignment and regulation would probably have to follow a similar architecture against risks: AIs monitoring other AIs and their outputs within jurisdictions, toward achieving approximate safety.

There are several levels of AI threats and risks. But the ones that are here now, such as deepfake videos and images, voice cloning and impersonation, and misinformation and disinformation, may require technical countermeasures, beyond just laws, to be addressed effectively.

Some have dismissed large language models as inconsequential, yet the harm they cause to victims and their loved ones shows otherwise. Regulation that stays outside the technology itself may be of limited effect.

In a recent report by The New York Times, “States Take Up A.I. Regulation Amid Federal Standstill,” it was stated that, “State lawmakers across the country have proposed nearly 400 new laws on A.I. in recent months, according to the lobbying group TechNet. California leads the states with a total of 50 bills proposed, although that number has narrowed as the legislative session proceeds. Colorado recently enacted a comprehensive consumer protection law that requires A.I. companies use “reasonable care” while developing the technology to avoid discrimination, among other issues. In March, the Tennessee legislature passed the ELVIS Act (Ensuring Likeness Voice and Image Security Act), which protects musicians from having their voice and likenesses used in A.I.-generated content without their explicit consent.”

Computer viruses, bots and bugs were not contained by regulation alone; they had to be fought at the source. AI is not the energy industry, nor the biotechnology or airline industry. Those industries are based in the physical world, while the digital world is more malleable. Doing harm with AI does not require moving physical things or sophisticated evasion, as in other industries, and AI may leave no trail. There are several industries whose products people can avoid for long stretches of time, but no industry is currently as dominant as the internet, and by extension, the digital.

This dominance, across social and productivity applications, leads the human mind to process digital outputs much like the physical world. One technical approach to AI safety and alignment could be web crawling for indexed AI tools, then web scraping for their tokens, to track some of the outputs that are used for harm.

The US AI Safety Institute could be a point of contact for states on technical options, especially on how to field jurisdictional AIs against other AIs, as well as to build safe intent against existential risks.

Apple Now Lets Siri Access ChatGPT

The wait is finally over. Apple has announced a partnership with OpenAI to integrate ChatGPT, powered by GPT-4o, into iOS, as revealed at Apple’s Worldwide Developers Conference 2024. This will allow Siri to access ChatGPT whenever the user wants its intelligence.

“very happy to be partnering with apple to integrate chatgpt into their devices later this year! think you will really like it,” said OpenAI chief Sam Altman.

This partnership will enable users to access ChatGPT’s capabilities, including image and document understanding, directly within iOS, iPadOS, and macOS without needing to switch between tools.

The integration will allow users of Siri to leverage ChatGPT’s intelligence when beneficial. Users will be prompted before any questions, documents, or photos are sent to ChatGPT, with Siri presenting the answer directly.

Additionally, ChatGPT will be incorporated into Apple’s system-wide Writing Tools to assist users in generating content.

ChatGPT will be available for free without requiring an account, and user information will not be logged.

Apple says Siri will now be able to tap into ChatGPT’s “expertise” when needed. For example, if you need menu ideas for a meal using ingredients from your garden, you can ask Siri. After receiving your permission, Siri will send the prompt to ChatGPT and provide you with the suggestions.

The AI will also provide image-generation tools to complement written content. Users can also include photos with their questions. For instance, if you want advice on decorating, you can take a picture and ask, “What kind of plants would go well on this deck?” Siri confirms that it’s okay to share your photo with ChatGPT and brings back relevant suggestions.

Privacy protections are built in when accessing ChatGPT within Siri and Writing Tools—requests are not stored by OpenAI, and users’ IP addresses are obscured. Users can also choose to connect their ChatGPT account, which means their data preferences will apply under ChatGPT’s policies.

ChatGPT integration will be coming to iOS 18, iPadOS 18, and macOS Sequoia later this year.

LLM Spotlight: Falcon

The Falcon family of large language models (LLMs), developed by the Technology Innovation Institute (TII) in Abu Dhabi, demonstrates impressive capabilities. Falcon LLMs span a wide range of parameter sizes across two generations:

  • Falcon 1.3B with 1.3 billion parameters
  • Falcon 7.5B with 7.5 billion parameters
  • Falcon 40B with 40 billion parameters
  • Falcon 180B with 180 billion parameters
  • Falcon 180B Instruct with 180 billion parameters

Falcon 180B is one of the larger LLMs in the industry and was trained on a dataset of over 3.5 trillion tokens from publicly available sources. By comparison, the Falcon 40B model was trained on around 1 trillion tokens. The smaller models work better for those with computational and memory constraints, or those worried that large models might overfit training data. The “Instruct” model is specifically fine-tuned to better follow human instructions, making it well-suited for interactive applications like chatbots.

Additionally, TII has released the Falcon 2 series in the following parameter sizes:

  • Falcon 2 11B with 11 billion parameters
  • Falcon 2 11B VLM (Vision-to-Language) with 11 billion parameters

The Falcon 2 11B model is a more efficient and accessible version compared to previous iterations and is trained on 5.5 trillion tokens. In fact, TII has stated that Falcon 2 11B surpasses the performance of Meta’s Llama 3 8B and performs on par with Google’s Gemma 7B. Falcon 2 models also have multilingual capabilities in English, French, Spanish, German, and more.

Falcon 2 11B VLM is notable in that it is TII’s first multimodal model and can convert visual inputs into text. Many LLMs have struggled with multimodal capabilities, and the Falcon 2 line is part of a new generation of LLMs to tackle this problem. What’s more, both Falcon 2 models run efficiently on a single GPU.

In the near future, Falcon 2 models will receive improvements like “Mixture of Experts,” a machine learning architecture that combines smaller networks with discrete specializations so that the most competent experts work together on complex, tailored solutions. It’s like having a group of knowledgeable assistants, each with a unique area of expertise, collaborating to make a forecast or judgment as necessary.
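As a conceptual sketch (not TII's implementation; real Mixture-of-Experts layers learn both the experts and the gating network, whereas the "experts" and gate below are hand-written stand-ins), the routing idea looks like this:

```python
# Conceptual Mixture-of-Experts sketch. A gate assigns a weight to each
# specialist, and the output is a blend dominated by the most relevant one.

def expert_double(x):          # specialist network no. 1
    return 2 * x

def expert_square(x):          # specialist network no. 2
    return x * x

def gate(x):
    """Return one weight per expert. A learned router would compute these
    from the input; here, small inputs favor doubling, large favor squaring."""
    return (0.9, 0.1) if x < 10 else (0.1, 0.9)

def mixture(x):
    w_double, w_square = gate(x)
    # Weighted blend of expert outputs; only the best-matched expert
    # contributes strongly.
    return w_double * expert_double(x) + w_square * expert_square(x)

print(mixture(3))    # doubling expert dominates: roughly 0.9*6 + 0.1*9 = 6.3
print(mixture(20))   # squaring expert dominates: roughly 0.1*40 + 0.9*400 = 364
```

In a production MoE model, each "expert" is a full feed-forward network and the gate typically activates only a few experts per token, which is what makes the approach efficient at large scale.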

Finally, one of the larger changes to the Falcon 2 series is the open-source approach. Original Falcon models came with some licensing restrictions. However, Falcon 2 models are released under a permissive open-source license, which gives developers worldwide unrestricted access to the tool.

Live updates: Everything Apple announced at WWDC 2024, including iOS 18, Siri, AI, and more

Apple's 2024 Worldwide Developer Conference (WWDC) is already shaping up to be one of the company's biggest events in decades. The opening keynote, which is taking place right now, is focused almost entirely on the buzzword we can't stop talking about — artificial intelligence.

Trailing behind major players like OpenAI, Google, and Microsoft, Apple today is unveiling a slew of AI features spread across the company's most popular operating systems. While AI is the event's main focus, Apple executives are also expected to announce this year's software upgrades for the iPhone, iPad, Apple Watch, Mac, and Vision Pro.

If you can't tune in for the two-hour-long event, ZDNET has you covered. Here's a complete breakdown of all the announcements at WWDC as they come.

New hand controls in VisionOS 2

  • Apple unveiled the first major upgrade to its recently released VisionOS: VisionOS 2.

  • In VisionOS 2, Photos gets an upgrade that allows users to create Spatial Photos with added depth from photos already in their camera rolls.

  • Spatial Personas in the Photos app lets users view photos together, creating a more shared experience.

  • VisionOS 2 also supports new hand motion commands, allowing users to access some settings more easily. For example, users can open their hands and tap to reach the home screen or turn their wrists to see the battery level.

  • Users who mirror their MacOS display to their Vision Pro will soon get new display sizes, including an ultrawide monitor view.

  • The Vision Pro will also add train support to Travel Mode, making it easier to work during your commute.

  • Developers will be able to create Spatial Apps with more ease due to new frameworks and APIs.

  • Apple is partnering with Blackmagic to make it easier to make Immersive Videos.

iOS 18

  • iPhone and iPad users will be able to customize their home screen further by placing apps wherever they'd like on the screen, as opposed to the usual fixed grid. App icon colors will also be customizable, allowing users to make apps any color they want or even match their home screen. Users can also change app icons to dark mode.

  • After five years of remaining untouched, Control Center received several upgrades, including the ability to customize its toggles, such as flashlight, screen recording, calculator, auto-rotate, and screen mirroring, by tapping, holding, and rearranging them. Control Center will also feature different pages with completely customizable controls, and developers can create controls for their own apps.

  • Apple also added privacy options, including the ability to lock an app, which requires users to authenticate with FaceID or passcode before accessing the app. Users can also hide an app, which makes it disappear from the home screen to a hidden part of their app library.

  • Messages received several upgrades. Tapbacks, the feature that allows users to react to messages by holding them down, was upgraded to feature different colors and include emojis. Users can add text effects to specific words or phrases instead of applying them to the entire message. Lastly, users will be able to schedule messages.

  • iPhone 14 and later models will have a new Messages via Satellite feature, which allows users to send messages via satellite when they don't have Wi-Fi or cellular service.

  • The Mail app will automatically categorize emails, a feature that will be available later this year.

  • The Wallet app now allows users to tap phones together to exchange Apple Cash without requiring them to exchange personal information like phone numbers.

  • The Journal app will now show more statistics and insights, including how many entries you've had this year, how many days you journaled, and more.

  • There is a new Game Mode for iPhone, meant to help gamers optimize their gaming experience. This includes minimizing background activity and using more responsive accessories, such as controllers.

  • The Photos app got what Apple dubbed its "biggest redesign ever," featuring a cleaner design and an improved search.

  • Apple reiterated that RCS support will be coming to the iPhone.

  • The Calendar app can now pull from the Reminders app for a more seamless overview of your schedule.

AirPods

  • AirPods Pro are getting voice isolation to enhance call quality in noisier environments.

  • You can now respond to Siri by nodding your head "yes" or shaking it "no."

  • Apple is also releasing a Personalized Spatial Audio API for game developers to build around the AirPods' audio technology.

Actor and music title insights on tvOS

  • When you watch an Apple TV show or movie, the new Insights section on tvOS will include additional information such as actor names and music titles. You can then easily add those music titles to your Apple Music playlist.

  • Apple added support for 21:9 formatting for viewing widescreen films.

  • tvOS will also feature a redesigned Apple Fitness+ experience, with enhanced dialogue, improved subtitles, and new Apple TV screen savers.

WatchOS 11

  • The new Training Mode allows users to get insights into how the intensity of their workouts is impacting their performance in the long run.

  • The new Vitals app will give users a quick look at their most important health metrics. If something seems out of the ordinary, users will receive pings alerting them of the anomaly.

  • Cycle tracking is getting upgraded to better suit pregnancy, showing gestational age, and more.

  • Smart Stack is also getting more intelligent, now able to automatically add widgets when needed and more.

iPadOS 18

  • The update will feature a redesigned tab bar and sidebar, which users can customize to showcase their favorite apps and access the most important sections of an app. You can also long-press the bar to move it around.

  • SharePlay will allow users to remotely control someone else's iPad or iPhone and share drawings on their screens.

  • In a long-awaited release, iPads will now have a calculator app for the first time, complete with the same interface as the one currently found on iPhones. Plus, you can use it with the Apple Pencil through a new Math Notes experience, which allows users to write expressions that the calculator app will solve for you once you type the equal sign.

  • Handwriting in Notes also got an upgrade with Smart Script, which refines users' writing to make it more legible while keeping the authenticity of the user's handwriting style. The feature can even match copied and pasted text to the user's handwriting. Spell check is also still compatible with it.

  • iPadOS 18 supports screen-sharing via SharePlay and the same Control Center customizations and emoji Tapbacks found in iOS 18.

macOS 15 Sequoia

  • Apple unveiled macOS Sequoia, which will include many of the new features added to iOS 18 and iPadOS 18.

  • The new iPhone mirroring capability on Mac allows users to experience their phone almost entirely from their Mac. For example, iPhone notifications will now be available on Mac, allowing users to interact with them and open corresponding apps, though the iPhone itself will appear locked.

  • Video meetings are also getting an upgrade, with new backgrounds and previews that allow you to see what you are about to share before sharing it.

  • Apple launched its take on password management services with its own Passwords app.

  • The AI summarization tool will live in Safari to help users process content like web pages and articles more efficiently. Safari will also assist users in discovering more helpful information about a page they are browsing when relevant, such as directions.

  • Apple also launched a new Viewer experience, which does for video what Reader does for text.

Apple Intelligence

  • Apple unveiled what it calls its new "personal intelligence" system under the name Apple Intelligence. The release puts generative models at the heart of the ecosystem of Apple devices.

  • With Apple Intelligence, your iPhone can prioritize notifications to ensure you get notified only when it's crucial throughout your day.

  • The release includes writing tools that leverage AI, including rewriting, proofreading, and summarizing text features available across mail, keynotes, third-party apps, and more.

  • Users can now create personalized images in the photo library, including sketches, illustrations, and animations. This feature is available in Messages, Apps, Freeform, Keynote, and Pages.

  • Apple Intelligence can tap into tools and carry out tasks on your behalf, such as "Show me all the photos," "Play the podcast," or "Pull the files that my coworker shared with me last week."

  • Because it's grounded in your personal information and context, and can retrieve data from across your apps and reference the content on your screen, Apple Intelligence is positioned to be your personal assistant.

  • Apple emphasized the safety and privacy precautions built into Apple Intelligence, particularly for on-device processing. The company touted the security of Apple silicon, including the A17 Pro and the M family of chips (M1, M2, M3, and M4).

  • For tasks that are too large for on-device processing and need to be completed in the cloud, Apple unveiled Private Cloud Compute, which protects users' privacy by running on servers specially built on Apple silicon. When a user makes a request, Apple Intelligence first tries to handle it on-device, calling on Private Cloud Compute only if the task requires more compute power. Apple reiterated that user data is never stored or sold to external parties.

  • Siri finally got the AI makeover it deserves, first with a new look: when tapped, a light wraps around the edges of your screen. Siri can now better understand users, even if they stutter, due to more advanced natural language processing (NLP). It now has conversational context, remembering what you just said and using it to carry out the next task. Users can also type requests to Siri. Because it has in-depth product knowledge, Siri can answer questions about functionality on iPad, iPhone, and Mac. Siri will also have Apple Intelligence's on-screen awareness, allowing it to take action on what it is viewing. The voice assistant can also take actions across apps, including photo editing. With access to your personal context, Siri can understand and complete new commands, such as pulling your driver's license information from a photo and automatically inputting it into a form. The Siri updates are coming to iPad and Mac, too.

  • Apple Intelligence also powers new features in Mail, including Rewrite, which offers users different versions of what they have already written, with suggestions shown in-line. Proofread edits for grammar, word choice, and sentence structure, and Summarize converts your text into bullet points. Smart Reply identifies the questions in an email and uses them to help craft a custom response. Browsing an inbox will also be easier with summaries displayed at the top of emails, and Apple Intelligence can even prioritize your emails, placing the most important ones at the top of your inbox.

  • There is an all-new focus option: reduce interruptions. When in this setting, your phone will only show you what is most important based on your personal activity and context.

  • Genmoji allows users to create AI-generated emojis based on what they type. You can also create a Genmoji based on a photo of a friend. Genmojis can be included in-line in Messages and even used for Tapbacks.

  • Image Playground allows users to leverage AI on-device to create images from text prompts, which can be easily shared in iMessage and elsewhere. The feature is also available in Keynote, Pages, and Freeform, and as a stand-alone Image Playground app.

  • Image Wand in the Notes app transforms a rough sketch into a polished image and is available directly in the tool palette. For example, you can circle a rough sketch in Notes and open Image Playground to transform your doodle into a fully-fledged image.

  • Apple Intelligence will also upgrade the Photos app with a new clean-up tool that removes unwanted objects. Search in videos allows users to easily find specific snippets of content, and users can create Memories on-demand, using text to edit and organize photos into movies.

  • In the Notes app, users can record and transcribe audio, and Apple Intelligence will generate a text summary of the recording. This experience is also available in the Phone app.

  • Apple Intelligence is free on iOS 18, iPadOS 18, and macOS Sequoia, and will be available to try in English only this summer.

Partnership with OpenAI

  • Apple also confirmed its partnership with OpenAI by integrating ChatGPT with Siri, which can send a request to ChatGPT for help with a user's permission. For example, if you ask Siri for assistance on a task it deems ChatGPT could answer better, Siri will suggest you use ChatGPT instead. ChatGPT's writing capabilities can also be leveraged within certain writing tasks.

  • Users can access ChatGPT via this integration for free, and their data will not be logged by OpenAI. ChatGPT Plus users can connect their subscriptions to access more advanced features.

  • The ChatGPT integration will be coming to iOS 18, iPadOS 18, and macOS Sequoia later this year.

Featured

OpenAI Appoints Sarah Friar as CFO and Kevin Weil as CPO

OpenAI has announced the addition of two new leaders to its executive team, Sarah Friar as Chief Financial Officer and Kevin Weil as Chief Product Officer. This move is aimed at advancing OpenAI’s mission of conducting world-leading research and safely deploying AI products.

“Sarah and Kevin bring a depth of experience that will enable OpenAI to scale our operations, set a strategy for the next phase of growth, and ensure that our teams have the resources they need to continue to thrive,” said OpenAI chief Sam Altman.

Sarah Friar, who most recently served as CEO of Nextdoor and previously as CFO at Square, will lead OpenAI’s finance team. Her role involves supporting core research capabilities and scaling operations to meet the demands of OpenAI’s growing customer base and complex global environment.

Friar is also a Board Member of Walmart and Consensys, a Fellow of the Aspen Institute, and Co-Chair of the Stanford Digital Economy Lab.

“I’m honored to join a team that is uniquely talented and mission-focused,” said Friar. “My goal is to help OpenAI continue excelling at what it does best—producing top-tier research and collaborating to maximize the benefits of AI tools for everyone.”

Kevin Weil, previously President of Product and Business at Planet Labs, will head the product team at OpenAI. He will focus on applying OpenAI’s research to develop products and services for consumers, developers, and businesses.

Weil has held significant positions including VP of Product for Novi at Facebook, VP of Product at Instagram, and SVP of Product at Twitter. He is also involved with the Council on Foreign Relations and serves on the boards of The Nature Conservancy and Black Product Managers Network.

“The product team at OpenAI has set the pace for both breakthrough innovation, and thoughtful deployment of AI products,” said Weil. “I am thrilled to be part of the next phase of growth, as we continue to safely and responsibly build towards AGI.”

“I’m super excited to announce that I’m joining OpenAI as Chief Product Officer! My entire career has been about working on big missions: connecting people and ideas at Instagram and Twitter, digitizing financial services at Libra/Novi, or most recently at Planet, using space to advance global security, sustainability, and transparency,” he posted on X.

The post OpenAI Appoints Sarah Friar as CFO and Kevin Weil as CPO appeared first on AIM.

Using AI to Keep Scammers and Fraudsters (a Little More) Honest

Fraudsters and scammers have had a good run lately according to the FTC, which found Americans lost $10 billion to frauds and scams in 2023, a 14% increase over the year before. But victims and law enforcement are fighting back with data and AI, and two startups named Valid8 and Scamnetic are hoping to make a dent.

There are important differences between frauds and scams, and different legal liabilities provide different incentive structures to eliminate them. For instance, if you fall victim to fraud, such as a cybercriminal stealing your identity and opening a new credit card account in your name, then the credit card company bears legal liability. However, if a cybercriminal tricks you into transferring money from your bank account into his, then you have fallen victim to a scam and the bank is not liable.

“With a scam, the scammers are manipulating you to take action on their behalf,” says Al Pascual, the CEO of Scamnetic and an expert on fraud and scams. “They’re trying to get you to send them money or give them access to something. But you’re the one who is logging into the site. You’re authenticating. You control the account, and you’re moving the money.”

Because banks are legally liable for credit card fraud, they have invested vast sums to detect fraud in real time. Technical innovations such as the EMV chips embedded in credit cards, along with sophisticated machine learning systems that can detect fraud within milliseconds, have driven the rate of credit card fraud down significantly.
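As a rough illustration of the real-time detection idea (not any bank's actual system, which would use trained models over many features), a minimal sketch might score each transaction against a cardholder's spending history and flag outliers:

```python
# Hypothetical sketch: score a transaction by how far it sits from the
# cardholder's historical spending, in standard deviations. Production
# fraud systems use trained ML models over many signals; this only
# illustrates the anomaly-scoring concept.
from statistics import mean, stdev

def anomaly_score(history, amount):
    """Return the distance of `amount` from historical spending,
    measured in standard deviations; higher means more suspicious."""
    if len(history) < 2:
        return 0.0
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return abs(amount - mu) / sigma

history = [42.10, 18.75, 55.00, 23.40, 61.20]  # typical purchases
print(anomaly_score(history, 48.00))     # small: an ordinary charge
print(anomaly_score(history, 4_500.00))  # large: flagged for review
```

A real deployment would score hundreds of features (merchant, geography, device, velocity) within milliseconds, but the shape of the decision is the same: compare the new event to the account's learned baseline.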


“The criminals realize rather than trying to go out and steal information about you, pretend to be you, and then hope I can get access to things, all I have to do is call you or email you,” Pascual said. “We’re talking tens of trillions or more of communications [per year] that are going out from scammers.”

The world of scams is blossoming at the moment, Pascual said. There are spray-and-pray scams, where the bad guys send out billions of emails, texts, and other messages. There are work-from-home schemes that lead you to become a check-cashing mule. There are spear-phishing scams where bad guys target individuals based on stolen data. Romance scams are popular. So are the grandchild-who-needs-money scams. And then there are the pig butchering scams, where the crooks take you for every penny.

“Some of them are just about kind of turn-and-burn. Some of them play the long game,” Pascual told Datanami. “I don’t know if you’ve gotten some of these texts yet, but they come across as super innocent. It’s like ‘Hey, I forgot, I forgot my golf clubs in your car’ or ‘Hey, I gotta give you that money for lunch.’ You’re like, who’s this?’ And then all of a sudden, you have a relationship with them.”

Since banks weren’t liable when clients fell victim to scams, they didn’t do much of anything to stop them, beyond putting out the occasional educational email, said Pascual, who spent his career fighting fraud and scammers at banks and other institutions.

“When banks tackle these issues, they’re applying all kinds of advanced technology,” he said. “They’re applying machine learning. They’re applying threat intelligence, third party data sources. They’re really doing whatever it takes to mitigate this risk, because the potential loss is astronomical. So they’re very methodical and they’re constantly in the weeds with the bad guy trying to figure out what they’re doing right and then coming up with best practices.”

Pascual founded Scamnetic to build the same sort of system to protect individuals from scammers and scams. The goal with Scamnetic is to build a real-time shield that protects consumers from scammers by identifying when a consumer likely is being exposed to a scam via phone, text, email, social media, or other websites.

“I took the same approach as the banks, which was identifying their MO [modus operandi], their tactics, techniques, and processes, and then identifying how we can take everything that they do, break it down and use it as a source of signals for as to whether or not there’s risk,” Pascual said. “And then once we’ve identified the areas of risk, we can then begin to apply controls to it.”

There are 25 to 30 types of scams being perpetrated in the US and Canada, and Scamnetic uses AI and machine learning to identify when a given communication is likely a scam.
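To make the idea concrete (this is a hypothetical illustration, not Scamnetic's actual models), classifying a communication as a likely scam amounts to checking it against signals drawn from known scam playbooks, such as manufactured urgency or unusual payment requests:

```python
# Hypothetical sketch of signal-based scam detection. The signal
# categories and phrases below are invented for illustration; a real
# system would use trained classifiers over far richer features.
SCAM_SIGNALS = {
    "urgency": ["act now", "immediately", "account suspended"],
    "payment": ["gift card", "wire transfer", "crypto"],
    "rapport": ["forgot my golf clubs", "money for lunch"],
}

def scam_signals(message):
    """Return the set of signal categories a message triggers."""
    text = message.lower()
    return {category for category, phrases in SCAM_SIGNALS.items()
            if any(phrase in text for phrase in phrases)}

msg = "Your account suspended! Act now and pay with gift card."
print(scam_signals(msg))  # both the urgency and payment signals fire
```

Note how the "rapport" category captures the innocent-seeming opener Pascual describes ("I forgot my golf clubs in your car"), which carries no payment request at all: scoring across categories, rather than any single keyword, is what lets a system catch long-game scams as well as spray-and-pray ones.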

“We know exactly what we need for each of these scams and that’s what we’re focused on. We’re leveraging that to get smart and be able to stay in line with the bad guy too,” he said. “My goal is literally just to kick their ass.”

When it launches next quarter, Scamnetic will be available via banks and telecommunications companies. Pascual is taking the B2B2C model because it will help him get the product into the consumer realm faster.

Banks are motivated to do something about scams because regulators in the UK and Australia recently began cracking down on scams by requiring the bank that receives a transfer to know who the person receiving the money is, he said. That is driving interest among banks to do something about scams.

“We protect people from scams,” he said. “Are we going to be perfect every time? No, but are we going to be a thousand times better than human intuition? Oh yeah.”

Using AI to Detect Corporate Fraud

Fraud is much more prevalent in the corporate world than one might think. In fact, a recent study found that upwards of 10% of corporations commit securities fraud each year, costing $830 billion in equity.

“We’re in the golden age of fraud, meaning money has been really cheap for a very long time,” says Chris McCall, co-founder and CEO of Valid8. “The tide hasn’t gone out in so long that I think a lot of stuff is going on right now, and it’s just covered up by the positive macroeconomic conditions that we’ve seen.”

Fraud is surprisingly easy to pull off, and it’s usually not that complex, McCall said. The conditions for fraud are ripe when a trusted employee has control over an account and moves money where it’s not supposed to go. “That’s it,” he said. “It’s as simple as that.”


But detecting the fraud and then proving it in court can be very difficult, and that’s where McCall’s startup Valid8 comes in.

Valid8 uses AI and machine learning techniques to help clients make sense of financial transactions. The company, which sells exclusively to law enforcement, attorneys, and auditors, basically makes sure that the actual flow of money, as evidenced by data from banks, is consistent with what the corporate records indicate. If they’re not consistent, that’s a sign of some funny business going on, possibly fraud (but sometimes there are other explanations too).

“We base everything on evidence,” McCall said. “The largest category of occupational fraud in the United States is asset misappropriation. What’s asset misappropriation? Well, it’s writing a check to someone and then having it coded to somebody else. So what we do is we would actually go look at the check, find the payee and payor, see the amount of money that was actually taken out of that account from the statement, and then compare it to the check ledger that’s in the ERP system itself.”
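The reconciliation McCall describes can be sketched in a few lines (a hypothetical illustration, not Valid8's software, and the check data below is invented): for each cleared check recovered from bank evidence, compare the payee and amount against the entry coded in the ERP check ledger, and flag any disagreement.

```python
# Hypothetical sketch of check-ledger reconciliation: compare what the
# bank evidence shows was actually paid against what the ERP ledger
# says was recorded, and flag checks where the two disagree.
def reconcile(bank_checks, erp_ledger):
    """Return check numbers whose bank evidence disagrees with the
    ERP ledger (differing payee, amount, or a missing entry)."""
    flags = []
    for number, (payee, amount) in bank_checks.items():
        ledger_entry = erp_ledger.get(number)
        if ledger_entry != (payee, amount):
            flags.append(number)
    return flags

bank_checks = {1001: ("Acme Supplies", 980.00),
               1002: ("J. Smith", 4_200.00)}        # from check images
erp_ledger  = {1001: ("Acme Supplies", 980.00),
               1002: ("Office Rent LLC", 4_200.00)}  # coded differently
print(reconcile(bank_checks, erp_ledger))  # → [1002]
```

Check 1002 is the asset-misappropriation pattern in miniature: the bank evidence shows money going to an individual while the ledger codes it as rent. The hard part in practice, as the article notes, is extracting those payee/amount pairs from hundreds of thousands of documents in the first place.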

The big challenge is that gathering all that evidence is hard, tedious work. Optical character recognition (OCR) technology helps automate the job, but it doesn’t go far enough. Valid8 uses computer vision tech to help it make sense of hundreds of thousands of documents involved in a case. It also uses classification and categorization machine learning algorithms, McCall said.

“Any type of financial evidence–think bank statements, copies of checks, wire details, transaction lists from spreadsheets–we use AI to pull it all in and then we do all kinds of QA [quality assurance] checks to tie everything back to the evidentiary documents,” McCall said. “Anywhere professionals have to follow the money, that’s what we’re designed to do.”

One of Valid8’s customers is Alvarez & Marsal, one of the consulting firms brought in to help clean up the mess at FTX, the crypto exchange that collapsed in a giant puddle of fraud. Its software also helped convict a mortgage broker who had raised $300 million from 750 investors. Most of the time, however, there is no fraud, according to McCall.

“We’re providing a lens of transparency, shining a spotlight on this stuff, and a lot of times, nothing’s wrong,” he said. “But in the cases where something is wrong, it’s almost always because no one’s really looked at it and that’s how they’re getting away with it.”

Valid8 has to be 100% right for the data to hold up in court, so there’s no room for mistakes. Things can get hairy when a new set of documents comes in, but they’re formatted a little differently than the first set. Without AI helping to make sense of the raw documents, it would take five to 10 times as much time and effort to assemble a complete record, McCall said. That might make it prohibitively expensive to prepare a case, which just encourages more bad behavior on the part of criminals trying to hide their nefarious deeds.

“How do you take a look at exactly what’s happening? It’s really hard to do,” he said. “That’s where AI comes in. If you can get transparency into that complex data set so you can start seeing things and looking at patterns and trying to identify certain mechanisms or how payments are being made or when they’re being made. All of a sudden you can stop a lot of that stuff. But until you get access to the data, it’s kind of the Wild West.”