Meta Reports its Most Profitable Quarter Since 2021, Stocks Surge 7% 

Meta Platforms, Inc. reported strong financial results for the second quarter of 2023. Revenue for the quarter was $31.999 billion, an 11% increase over the same period in 2022. Despite higher expenses, Meta’s income from operations rose 12% year-over-year to $9.392 billion. Net income was $7.788 billion, up 16% from Q2 2022, and diluted earnings per share (EPS) came in at $2.98, a 21% increase from the same period last year.

“We had a good quarter. We continue to see strong engagement across our apps and we have the most exciting roadmap I’ve seen in a while with Llama 2, Threads, Reels, new AI products in the pipeline, and the launch of Quest 3 this fall,” Meta founder and CEO Mark Zuckerberg said during the earnings call.

This marked Meta’s most profitable quarter since 2021, partially driven by significant revenue growth from their short-form video product, Reels, leading to a 7% surge in the company’s stock during after-market trading.

The company experienced significant growth in user engagement, with Family Daily Active People (DAP) averaging 3.07 billion for June 2023. Family Monthly Active People (MAP) reached 3.88 billion as of June 30, 2023, indicating a 6% increase year-over-year. Facebook Daily Active Users (DAUs) were 2.06 billion on average for June 2023, representing a 5% increase from the same period in 2022. Facebook Monthly Active Users (MAUs) were 3.03 billion as of June 30, 2023, showing a 3% increase year-over-year.

In terms of advertising, ad impressions delivered across their Family of Apps increased by 34% year-over-year in Q2 2023. However, the average price per ad decreased by 16% year-over-year.

Meta’s financial position as of June 30, 2023, showed $53.45 billion in cash, cash equivalents, and marketable securities. On the other hand, the company’s long-term debt reached $18.38 billion as of the same date.

The company implemented restructuring measures to enhance efficiency and align its business and strategic priorities, with most of the planned employee layoffs completed as of June 30, 2023. Further assessments were ongoing regarding facilities consolidation and data center restructuring initiatives.

For the outlook of the third quarter of 2023, Meta expects total revenue to be in the range of $32-34.5 billion, with a foreign currency tailwind of approximately 3% to year-over-year total revenue growth. The company also anticipates total expenses for the full year 2023 to be in the range of $88-91 billion. However, Meta foresees higher operating losses in 2023 for Reality Labs due to investments in augmented reality/virtual reality and ecosystem scaling.

The company acknowledges potential regulatory challenges in the EU and the US, which could significantly impact their business and financial results.

Overall, Meta’s financial results for Q2 2023 showcase positive growth and performance across various metrics, alongside an optimistic outlook for the future. The company’s focus on the metaverse, AI, and ongoing product development efforts will be closely monitored by investors and analysts.

The post Meta Reports its Most Profitable Quarter Since 2021, Stocks Surge 7% appeared first on Analytics India Magazine.

Meta reports 11% revenue growth, but the metaverse still suffers

By Amanda Silberling

Meta had one of its best quarters since before it changed its name from Facebook. The company reported 11% year-over-year revenue growth, a relief for investors, because this time last year Mark Zuckerberg’s company posted its first-ever quarterly revenue decline. Meta’s stock price reflects this: after a nosedive through 2022, it is climbing back up, trading at around $298 per share after markets closed today.

Part of why Meta is making more money is that it has laid off more than 20,000 employees, which sharply lowers personnel costs. Though this correction for over-hiring has serious implications for those who lost their jobs, investors seem to be pleased. Meta calls this downsizing “The Year of Efficiency,” a much more palatable way to say “mass layoffs” on an earnings call.

The revenue growth isn’t just a result of lower costs, though. Zuckerberg says that Reels has hit 200 billion daily plays across Instagram and Facebook, and its monetization run rate has increased to more than $10 billion, up from $3 billion last fall. And though Meta hasn’t started monetizing Threads yet, the new competitor to Twitter (or are we saying X now?) is off to a solid start, according to Zuckerberg.

Though much at Meta is looking up, Reality Labs continues to struggle. Meta’s VR and AR products brought in just $276 million, while Reality Labs lost $3.7 billion this quarter. Across all of 2022, Reality Labs lost Meta $13.7 billion.

Meta CFO Susan Li writes in the Q2 2023 earnings report that the company anticipates Reality Labs’ operating losses to get even larger. But the general public still seems uninterested in Meta’s vision for a metaverse. Apple also entered the AR/VR ring this quarter, announcing its $3,499 Vision Pro AR headset. This long-awaited product is not quite within financial reach for the average consumer, but Meta is gearing up for the fall, when it will compete with Apple and release its Quest 3 headset. The Quest 3, a mixed reality headset, will retail for $499. That’s not chump change, but it’s a lot more accessible than $3,499.

Despite Reality Labs’ massive costs, Zuckerberg remains optimistic (at least when speaking publicly with investors, who remain skeptical that these investments will pay off).

“We’re here to build awesome experiences that help people connect,” he said. “I think helping to shape the next platform is going to unlock that in a profound way for decades to come.”

It’s reminiscent of early last year, when he unabashedly told investors that even though Reality Labs is bleeding money, the 2030s will be exciting.

“I can’t guarantee you that I will be right about this, but I do think this is the direction that the world is going in,” Zuckerberg said.

Amazon homes in on generative AI at AWS Summit and unveils new AI projects

Amazon is most commonly associated with its e-commerce platform, which has become a giant in the industry thanks to its ability to sell nearly anything you can think of and, with a membership, deliver it to your doorstep within two days.

However, Amazon also has a strong presence in cloud computing and is about to become more involved with generative AI.

On Tuesday, Amazon held its AWS (Amazon Web Services) Summit in New York, an event focused on Amazon's work in the cloud that features expos, learning sessions, and a keynote address.

This year, Amazon used the platform to unveil several significant generative AI announcements intended to streamline how developers build AI applications and how enterprises integrate AI.

Building an AI model involves several stages: choosing the chips that will power it, building and training the model itself, and finally applying the model in the real world.

AWS's announcements today help optimize every step of the process. Here is a roundup of some of the most noteworthy announcements.

AWS HealthScribe

AWS HealthScribe is a HIPAA-eligible, generative AI-powered service that transcribes conversations between patients and clinicians and creates clinical documents.

The generated clinical notes include summaries of the patient-clinician interaction, AI-generated insights, references to the original transcription, and structured medical terms.

The overall purpose of the service is to reduce the time clinicians spend writing detailed documentation, freeing that time for higher-value work such as face-to-face interaction with patients.

"With AWS HealthScribe, healthcare software providers can use a single API to automatically create robust transcripts, extract key details (e.g., medical terms and medications), and create summaries from doctor-patient discussions that can then be entered into an electronic health record (EHR) system," said Amazon in the press release.

To address privacy concerns, Amazon says the service has data security and privacy built in: it does not retain customer data after processing and encrypts customer data in transit.

This isn't the first time generative AI has been geared towards the medical field, as seen by Google's launch of Med-PaLM 2 in April.

Free and low-cost AWS generative AI courses

Generative AI has surged in popularity since the release of ChatGPT in November. The technology has the potential to improve workflows and productivity across industries and, as a result, has become a skill highly sought after by employers.

Despite the benefits and popularity of generative AI, many people feel like they don't have the proper knowledge to use AI correctly, as previously reported on by ZDNET.

AWS added seven different generative AI courses for people of all skill sets and experience levels. The courses cover different aspects of generative AI, from topics as hands-on as building with Amazon CodeWhisperer to big-picture overviews of ways to use AI in your business.

You can browse the courses here.

Amazon Bedrock updates

To help developers build on foundation models, Amazon launched its foundation model service, Amazon Bedrock, back in April.

With Amazon Bedrock, developers can choose which foundation model they want for their specific use case. Until today, the choices were Amazon's Titan, Anthropic's Claude, Stability AI's Stable Diffusion, and AI21 Labs' Jurassic-2.

At the Summit, Amazon announced that the lineup will expand to include Claude 2, Anthropic's latest LLM; SDXL 1.0, Stability AI's latest text-to-image model; and models from a new provider, Cohere.

By expanding the range of available models, customers can more easily find the one that best fits their needs.

Amazon Bedrock is also introducing agents, which allow developers to build AI applications using proprietary data without manually training the model on the data.

Since agents will allow applications to access organization-specific data, developers will be able to create applications capable of accomplishing a broader range of tasks with up-to-date answers.

These new Amazon Bedrock features are available in preview today.
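
To make the "choose a model for your use case" idea concrete, here is a minimal sketch of what calling one of the hosted models looks like. The request-body builder below follows the Claude text-completion prompt convention ("Human:/Assistant:" framing, `max_tokens_to_sample`); the model ID and parameter names are assumptions based on Bedrock's Claude integration at the time, and the actual network call is shown in comments because it requires AWS credentials.

```python
import json

def build_claude_request(prompt: str, max_tokens: int = 300) -> str:
    """Build a JSON request body for a Claude text-completion call.

    The Human:/Assistant: framing and parameter names follow Anthropic's
    Claude prompt convention; treat the exact shape as illustrative.
    """
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
        "temperature": 0.5,
    })

body = build_claude_request("Summarize this quarter's cloud announcements.")

# With AWS credentials configured, the request would be sent like this:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(modelId="anthropic.claude-v2", body=body)
# print(json.loads(response["body"].read())["completion"])
```

Swapping in a different provider's model is then largely a matter of changing the model ID and request-body shape, which is the flexibility the expanded lineup is meant to provide.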

Vector engine for Amazon OpenSearch Serverless

When you input a prompt into a generative AI model, both the prompt you enter and the output you receive are conversational, as the rise of massively popular chatbots has shown.

Many of these generative AI applications rely on vector embeddings, numerical representations of text, image, and video data that capture contextual relationships between items, to help generate accurate responses.

AWS's vector engine for Amazon OpenSearch Serverless makes it easier for developers to search embeddings and incorporate them into LLM applications.

Now available in preview, the new vector engine allows developers to store, search, and retrieve billions of vector embeddings in real time without worrying about the underlying infrastructure, according to the release.

This will make it easier for developers to fine-tune models, and as a result, create models that produce better, more accurate results.
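
The core idea behind embedding search can be illustrated without any AWS services at all. In this minimal sketch, the three-dimensional "embeddings" are made up for illustration (real models produce hundreds or thousands of dimensions); documents are ranked by cosine similarity to a query vector, which is essentially what a vector engine does at scale:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings"; a real embedding model would supply these.
documents = {
    "cloud pricing": [0.9, 0.1, 0.0],
    "image generation": [0.0, 0.2, 0.9],
    "cloud security": [0.8, 0.3, 0.1],
}
query = [0.85, 0.2, 0.05]  # pretend embedding of the query "cloud costs"

# Rank documents by similarity to the query, most similar first.
ranked = sorted(documents,
                key=lambda d: cosine_similarity(query, documents[d]),
                reverse=True)
print(ranked[0])  # the nearest document by cosine similarity
```

A managed vector engine replaces the `sorted` call with an approximate nearest-neighbor index so the same ranking works over billions of embeddings.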

Public availability of Amazon EC2 P5

In March, Amazon announced its Amazon Elastic Compute Cloud (Amazon EC2) P5 Instances powered by Nvidia H100 Tensor Core GPUs and meant to deliver the compute performance needed to build and train machine learning (ML) models.

According to Amazon, these instances can deliver up to six times faster training than the previous generation and cut training costs by up to 40%.

Today, those P5 instances became generally available.
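
To see what those headline claims would mean in practice, here is a back-of-the-envelope calculation. The baseline hours and hourly rate below are invented for illustration and are not actual AWS pricing; only the "up to 6x faster" and "up to 40% cheaper" figures come from Amazon's claims.

```python
# Hypothetical baseline: a training run on previous-generation instances.
baseline_hours = 600.0   # invented figure for illustration
baseline_rate = 40.0     # invented $/hour, not actual AWS pricing
baseline_cost = baseline_hours * baseline_rate  # $24,000

# Applying Amazon's claims: up to 6x faster, up to 40% lower training cost.
p5_hours = baseline_hours / 6          # 100 hours
p5_cost = baseline_cost * (1 - 0.40)   # $14,400

print(f"{p5_hours:.0f} hours, ${p5_cost:,.0f}")
```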

Google and Microsoft partner with OpenAI to form AI safety watchdog group

Some of the biggest names in technology are coming together to help keep AI safe. Tech giants Google and Microsoft are partnering with OpenAI and Anthropic to ensure that AI technology progresses in a "safe and responsible" fashion.

This partnership comes after a White House meeting last week with the same companies, plus Amazon and Meta, to discuss the secure and transparent development of future AI.

The watchdog group, called the Frontier Model Forum, noted that while some governments around the world are taking measures to keep AI safe, those measures aren't enough. Instead, it's up to technology companies.

In a blog post on Google's site, the forum listed four main objectives:

  • Improving AI safety research to minimize future risks
  • Identifying best practices for responsible development and deployment of future models
  • Working with policymakers, academics, and others to share knowledge about AI safety risks
  • Supporting the development of AI uses in solving humanity's greatest problems like climate change and cancer detection

An OpenAI representative added that while the forum would work with policymakers, they wouldn't get involved with government lobbying.

Membership in the forum is open to any company that meets three criteria: developing and deploying frontier models, demonstrating a strong commitment to frontier model safety, and a willingness to participate in joint initiatives with other members. What is a "frontier model"? The forum defines it as "any large-scale machine-learning models that go beyond current capabilities and have a vast range of abilities."

While AI has world-changing potential, the forum says, "appropriate guardrails are required to mitigate risks." Anna Makanju, vice president of global affairs at OpenAI, issued a statement: "It's vital that AI companies — especially those working on the most powerful models — align on common ground. This is urgent work. And this forum is well-positioned to act quickly to advance the state of AI safety."

Microsoft President Brad Smith echoed those sentiments, saying that it's ultimately up to companies creating AI technology to keep them safe and under human control.

Stability AI releases its latest image-generating model, Stable Diffusion XL 1.0

By Kyle Wiggers

AI startup Stability AI continues to refine its generative AI models in the face of increasing competition — and ethical challenges.

Today, Stability AI announced the launch of Stable Diffusion XL 1.0, a text-to-image model that the company describes as its “most advanced” release to date. Available in open source on GitHub in addition to Stability’s API and consumer apps, Clipdrop and DreamStudio, Stable Diffusion XL 1.0 delivers “more vibrant” and “accurate” colors and better contrast, shadows and lighting compared to its predecessor, Stability claims.

In an interview with TechCrunch, Joe Penna, Stability AI’s head of applied machine learning, noted that Stable Diffusion XL 1.0, which contains 3.5 billion parameters, can yield full 1-megapixel resolution images “in seconds” in multiple aspect ratios. “Parameters” are the parts of a model learned from training data and essentially define the skill of the model on a problem, in this case generating images.

The previous-gen Stable Diffusion model, Stable Diffusion XL 0.9, could produce higher-resolution images as well, but required more computational might.

“Stable Diffusion XL 1.0 is customizable, ready for fine-tuning for concepts and styles,” Penna said. “It’s also easier to use, capable of complex designs with basic natural language processing prompting.”

Stable Diffusion XL 1.0 also improves on text generation. While many of the best text-to-image models struggle to generate images with legible logos, much less calligraphy or fonts, Stable Diffusion XL 1.0 is capable of “advanced” text generation and legibility, Penna says.

And, as reported by SiliconAngle and VentureBeat, Stable Diffusion XL 1.0 supports inpainting (reconstructing missing parts of an image), outpainting (extending existing images) and “image-to-image” prompts — meaning users can input an image and add some text prompts to create more detailed variations of that picture. Moreover, the model understands complicated, multi-part instructions given in short prompts, whereas previous Stable Diffusion models needed longer text prompts.

Image: Stable Diffusion XL 1.0 (Image Credits: Stability AI)

“We hope that by releasing this much more powerful open source model, the resolution of the images will not be the only thing that quadruples, but also advancements that will greatly benefit all users,” he added.

But as with previous versions of Stable Diffusion, the model raises sticky moral issues.

The open source version of Stable Diffusion XL 1.0 can, in theory, be used by bad actors to generate toxic or harmful content, like nonconsensual deepfakes. That’s partially a reflection of the data that was used to train it: millions of images from around the web.

Countless tutorials demonstrate how to use Stability AI’s own tools, including DreamStudio, an open source frontend for Stable Diffusion, to create deepfakes. Countless others show how to fine-tune the base Stable Diffusion models to generate porn.

Penna doesn’t deny that abuse is possible — and acknowledges that the model contains certain biases, as well. But he added that Stability AI’s taken “extra steps” to mitigate harmful content generation by filtering the model’s training data for “unsafe” imagery, releasing new warnings related to problematic prompts and blocking as many individual problematic terms in the tool as possible.

Stable Diffusion XL 1.0’s training set also includes artwork from artists who’ve protested against companies including Stability AI using their work as training data for generative AI models. Stability AI claims that it’s shielded from legal liability by fair use doctrine, at least in the U.S. But that hasn’t stopped several artists and stock photo company Getty Images from filing lawsuits to stop the practice.

Stability AI, which has a partnership with startup Spawning to respect “opt-out” requests from these artists, says that it hasn’t removed all flagged artwork from its training data sets but that it “continues to incorporate artists’ requests.”

“We are constantly improving the safety functionality of Stable Diffusion and are serious about continuing to iterate on these measures,” Penna said. “Moreover, we are committed to respecting artists’ requests to be removed from training data sets.”

To coincide with the release of Stable Diffusion XL 1.0, Stability AI is releasing a fine-tuning feature in beta for its API that will allow users to use as few as five images to “specialize” generation on specific people, products and more. The company is also bringing Stable Diffusion XL 1.0 to Bedrock, Amazon’s cloud platform for hosting generative AI models, expanding on its previously announced collaboration with AWS.

The push for partnerships and new capabilities comes as Stability suffers a lull in its commercial endeavors — facing stiff competition from OpenAI, Midjourney and others. In April, Semafor reported that Stability AI, which has raised over $100 million in venture capital to date, was burning through cash — spurring the closing of a $25 million convertible note in June and an executive hunt to help ramp up sales.

“The latest SDXL model represents the next step in Stability AI’s innovation heritage and ability to bring the most cutting-edge open access models to market for the AI community,” Stability AI CEO Emad Mostaque said in a press release. “Unveiling 1.0 on Amazon Bedrock demonstrates our strong commitment to work alongside AWS to provide the best solutions for developers and our clients.”

Stability AI Unveils Stable Diffusion XL 1.0

AI startup Stability AI has taken a significant leap forward in the realm of generative AI models. The company recently announced the release of its latest text-to-image model, Stable Diffusion XL 1.0. This model, described as the company's “most advanced” to date, is available in open source on GitHub, as well as through Stability's API and consumer apps, Clipdrop and DreamStudio.

Enhanced Features and Performance

Stable Diffusion XL 1.0 boasts improvements in several key areas, including more vibrant and accurate colors, better contrast, shadows, and lighting. The model, which contains 3.5 billion parameters, can generate full 1-megapixel resolution images in seconds, in multiple aspect ratios. This is a significant upgrade from the previous model, Stable Diffusion XL 0.9, which required more computational power to produce high-resolution images.

The new model is also customizable and ready for fine-tuning for concepts and styles. It's capable of complex designs with basic natural language processing prompting. Furthermore, it has improved text generation capabilities, a feature that many text-to-image models struggle with.

Ethical Considerations and Future Developments

However, the release of Stable Diffusion XL 1.0 is not without its ethical challenges. The open-source nature of the model means it could potentially be used to generate harmful content, such as nonconsensual deepfakes. Stability AI acknowledges these concerns and has taken steps to mitigate harmful content generation, including filtering the model's training data for unsafe imagery and blocking problematic terms.

Despite these challenges, Stability AI remains committed to refining and improving its models. The company is also releasing a fine-tuning feature in beta for its API, allowing users to specialize generation on specific people, products, and more. Furthermore, Stability AI is bringing Stable Diffusion XL 1.0 to Bedrock, Amazon's cloud platform for hosting generative AI models.

The release of Stable Diffusion XL 1.0 represents a significant step forward in the field of generative AI models. It will be interesting to see how Stability AI continues to navigate the balance between innovation and ethical considerations in the future.

The advancements made by Stability AI with the release of Stable Diffusion XL 1.0 are impressive. However, the ethical challenges raised by such technology cannot be overlooked. It's crucial for AI companies to continue to prioritize ethical considerations and safeguards as they develop increasingly advanced models.

The Commercial Landscape and Future Plans

Stability AI's advancements come at a time when the company is experiencing a lull in its commercial endeavors. Despite having raised over $100 million in venture capital, the company has been burning through cash, leading to the closing of a $25 million convertible note in June. The company is also on an executive hunt to help ramp up sales.

However, Stability AI's CEO, Emad Mostaque, remains optimistic. He views the latest Stable Diffusion model as a testament to the company's commitment to innovation, and the unveiling of Stable Diffusion XL 1.0 on Amazon Bedrock as a further demonstration of its dedication to working alongside AWS to provide the best solutions for developers and clients.

Stability AI's financial challenges highlight the difficulties faced by many AI startups. While the technology is advancing rapidly, turning these advancements into profitable business models can be a complex task. However, the company's commitment to innovation and its strategic partnerships suggest a promising future.

KDnuggets News, July 26: Free Generative AI Training from Google • Data Engineering Beginner’s Guide • GPT-Engineer: Your New AI Coding Assistant

Features

  • Free From Google: Generative AI Learning Path by Eugenia Anello
  • A Beginner’s Guide to Data Engineering by Bala Priya C
  • GPT-Engineer: Your New AI Coding Assistant by Matthew Mayo

From Our Partners

  • Unlock DataOps Success with DataOps.live: Featured in Gartner Market Guide! by DataOps.live
  • How SAS can help catapult practitioners’ careers by SAS
  • Advance your Career with the 3rd Best Online Master’s in Data Science Program by Bay Path University

This Week's Posts

  • Unveiling the Power of Meta’s Llama 2: A Leap Forward in Generative AI? by Matthew Mayo
  • GPT-4 Details Have Been Leaked! by Nisha Arya
  • Generative AI with Large Language Models: Hands-On Training by Abid Ali Awan
  • Forget PIP, Conda, and requirements.txt! Use Poetry Instead And Thank Me Later by Bex Tuychiev
  • Exploring the Power and Limitations of GPT-4 by Nate Rosidi
  • The Drag-and-Drop UI for Building LLM Flows: Flowise AI by Nisha Arya
  • What is Superalignment & Why It is Important? by Abid Ali Awan
  • Free Generative AI Courses by Google by Nisha Arya
  • Pandas: How to One-Hot Encode Data by Muhammad Arham
  • Unlock the Secrets to Choosing the Perfect Machine Learning Algorithm! by Dr. Roi Yehoshua
  • Textbooks Are All You Need: A Revolutionary Approach to AI Training by Matthew Mayo
  • Everything You Need About the LLM University by Cohere by Nisha Arya
  • Unlocking the Power of Numbers in Health Economics and Outcomes Research by Mayukh Maitra

From Around The Web

  • Scikit-learn Crash Course for Data Scientists by Data Science Horizons
  • Building Your mini-ChatGPT at Home by Adrian Tam
  • Python Decorators Unleashed [eBook] by Python Power Programming
  • Elegant prompt versioning and LLM model configuration with spacy-llm by Déborah Mesquita
  • AI researcher Geoffrey Hinton thinks AI has or will have emotions by Matthias Bastian
  • What We Know About LLMs (Primer) by Will Thompson

More On This Topic

  • A Beginner’s Guide to Data Engineering
  • GPT-Engineer: Your New AI Coding Assistant
  • KDnuggets News, July 13: Linear Algebra for Data Science; 10 Modern Data…
  • Mastering Generative AI and Prompt Engineering: A Free eBook
  • Free Generative AI Courses by Google
  • Free From Google: Generative AI Learning Path

Google, OpenAI, Microsoft and Anthropic Form Coalition for Responsible AI: Frontier Model Forum

July 26, 2023, by Jaime Hampton

President Biden met with seven artificial intelligence companies last week to seek voluntary safeguards for AI products and discuss future regulatory prospects.

Now, four of those companies have formed a new coalition aimed at promoting responsible AI development and establishing industry standards in the face of increasing government and societal scrutiny.

“Today, Anthropic, Google, Microsoft, and OpenAI are announcing the formation of the Frontier Model Forum, a new industry body focused on ensuring safe and responsible development of frontier AI models,” a statement on OpenAI’s website read.

For the benefit of what it calls the “entire AI ecosystem,” the Frontier Model Forum will leverage the technical and operational expertise of its member companies, the statement said. The group seeks to advance technical standards, including benchmarks and evaluations, and develop a public library of solutions.

Core objectives of the coalition include:

  • Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
  • Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
  • Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.
  • Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.

The Forum describes 'frontier models' as large-scale machine learning models that outperform the capabilities of current top-tier models in accomplishing a diverse array of tasks.

The group plans to tackle responsible AI development with a focus on three key areas: identifying best practices, advancing AI safety research, and facilitating information sharing among companies and governments.

“The Forum will coordinate research to progress these efforts in areas such as adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors and anomaly detection,” it said.

Other future actions listed include establishing an advisory board to guide the group’s strategy and priorities, as well as institutional arrangements such as a charter, governance, and funding, supported by a working group and executive board. The Forum indicated its support for existing industry efforts like the Partnership on AI and MLCommons and plans to consult with civil society and governments on “meaningful ways to collaborate.”

Though the coalition has just four members, it is open to organizations that actively develop and implement these frontier models, show a firm dedication to their safety through both technical and institutional means, and are ready to further the Forum's objectives through active participation in joint initiatives, according to a list of membership requirements.

“The Forum welcomes organizations that meet these criteria to join this effort and collaborate on ensuring the safe and responsible development of frontier AI models,” the group wrote.

These goals are still somewhat nebulous, as were the results of the White House meeting last week. In his remarks regarding the meeting, President Biden acknowledged the transformative impact of AI and the need for responsible innovation with safety as a priority. He also noted the need for bipartisan legislation to regulate the collection of personal data, safeguard democracy, and manage the potential disruption of jobs and industries caused by advanced AI. To achieve this, Biden said a common framework to govern AI development is needed.

President Biden delivers remarks following a meeting with seven prominent AI companies last week. (Source: White House)

"Social media has shown us the harm that powerful technology can do without the right safeguards in place. And I've said at the State of the Union that Congress needs to pass bipartisan legislation to impose strict limits on personal data collection, ban targeted advertisements to kids, and require companies to put health and safety first," the President said, noting that we must remain "clear-eyed and vigilant about the threats emerging technologies that can pose – don't have to, but can pose – to our democracy and our values."

Critics have said these attempts at self-regulation on the part of the major AI players could be creating a new generation of technology monopolies.

Shane Orlick, president of AI writing platform Jasper, told EnterpriseAI in an email that there is a need for ongoing, in-depth engagement between the government and AI innovators.

“AI will affect all aspects of life and society—and with any technology this comprehensive, the government must play a role in protecting us from unintended consequences and establishing a single source of truth surrounding the important questions these new innovations create, including what the parameters around safe AI actually are,” Orlick said.

He continued: “The Administration’s recent actions are promising, but it’s essential to deepen engagement between government and innovators over the long term to put and keep ethics at the center of AI, deepen and sustain trust over the inevitable speed bumps, and ultimately ensure AI is a force for good. That also includes ensuring that regulations aren’t defusing competition creating a new generation of tech monopolies — and instead invites all of the AI community to responsibly take part in this societal transformation.”

Shopify adds new AI tools for commerce. Here’s how you can use them for your business.

Shopify, the e-commerce company, quickly adopted AI and unveiled features like Shopify Magic, which uses AI to generate product descriptions.

Now, Shopify is building on Shopify Magic and adding several new AI features to the platform.

Shopify holds a semi-annual showcase of its latest products and innovations, and this year it used the opportunity to announce two new AI features: Sidekick and AI-driven email campaigns.

Sidekick is an AI-powered commerce assistant that can answer user questions about business operations. Shopify CEO Tobi Lütke demoed the feature on Twitter earlier this month.

Example use cases range from "how to set up a discount for a holiday sale" to "help me segment my customers so I can better engage them in my marketing," according to the release.

The AI-driven email campaigns feature is precisely what it sounds like — a tool that generates personalized emails from user prompts.

You only have to enter a few words and Shopify's AI can generate tailored email newsletters, announcements, and more, according to the release.

In addition, the AI can make intelligent recommendations that drive click-through rates and increase engagement.

OpenAI pulls its own AI detection tool because it was performing so poorly

When OpenAI debuted an AI detection tool less than six months ago, it admitted the feature designed to help users spot text written by artificial intelligence was "imperfect." Now the company has quietly pulled the feature due to its "low accuracy."

The announcement came in the form of an update to a January 2023 blog post where OpenAI announced the feature. A note at the top of the post now reads, "As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy." The update went on to say that the company was "currently researching more effective provenance techniques for text."

The feature, which was free to use, worked by analyzing a piece of text and assigning it to one of several categories, ultimately rating it on a scale from "very unlikely" to "likely" to have been written by AI.

When the tool debuted, Lama Ahmad, policy research director at OpenAI, told CNN that OpenAI didn't recommend using the tool in isolation because the company knew "it can be wrong and will be wrong at times." That may have been an understatement.

A study from earlier this month showed that AI detectors were especially bad at assessing content written by people who didn't speak English as their first language, with an average miss rate of 61%. One program in the study incorrectly flagged a whopping 97% of human-written essays as AI-generated.

Earlier this year, ZDNET tested several AI detection tools and found similar results — the tools were fairly inaccurate and easy to trick.

Other AI detectors do exist, but OpenAI taking its tool away from the public shows how challenging it is to accurately detect AI. And it's easy to see how this could go wrong. In an education or professional setting, unfairly accusing someone of plagiarism could have dire consequences.

As AI continues to progress, fears over its use to write essays and complete work are legitimate. But until detection tools improve significantly, we'll have to stick to policing it with human eyes.
