LLMs: Does human text data make generative AI an entity?


There is a recent interview, The Ethical Puzzle of Sentient AI, where a professor said, “But there’s also the problem that I’ve called the ‘gaming problem’ — that when the system has access to trillions of words of training data, and has been trained with the goal of mimicking human behavior, the sorts of behavior patterns it produces could be explained by it genuinely having the conscious experience. Or, alternatively, they could just be explained by it being set the goal of behaving as a human would respond in that situation. So I really think we’re in trouble in the AI case, because we’re unlikely to find ourselves in a position where it’s clearly the best explanation for what we’re seeing — that the AI is conscious. There will always be plausible alternative explanations. And that’s a very difficult bind to get out of.”

If an adult human is dropped into an environment where nothing is familiar, the chances of survival are slim, because in trying to find out what something might be, that thing could prove harmful, ending the quest.

Though much emphasis is placed on natural intelligence, ultimately, to the mind, what is prevalent is data, existing data. Much of what is referred to as intelligence is coalesced data. For example, when someone figures out a tough math question in seconds, the outcome may be labeled intelligence, but the way to do it already exists as data in the mind.

It is with existing data that humans relate to the world, with experiences for expansion. Existing data [or information] in the mind is almost as important as the ability to receive input [or sensory data] and process it. Processing, or interpretation, in the mind is done with what is already available. Sometimes the data is, for humans, the basis of existence, roles, and so forth.

Computers are useful to people because they access an enormous amount of data, which is applied as a tool for getting things done. AI, however, takes [human text] data and applies a mechanism to it, predicting the next word in ways that follow the patterns of human communication.
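The idea of "predicting the next word" from existing text can be seen in miniature with a toy frequency model. This sketch is only illustrative: a real LLM uses a neural network with billions of parameters, not a bigram counter, and the corpus here is made up.

```python
# Toy illustration of next-word prediction learned from text data.
# Real LLMs are vastly more sophisticated; this only shows the basic
# idea of extracting patterns from an existing corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

The prediction is only as good as the patterns present in the data, which is the op-ed's point: what looks like behavior is a mechanism applied to existing human text.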

What seems significant about LLMs is how they possess a form of existence for humans. Physical presence is not the only means of existence: there are writings and oral messages, and in recent decades there have been audio and video recordings as other ways to be, beyond being present.

Generative AI can produce images, people on video, and fresh texts, doing some of what real people and writings do. A real human can mimic someone else's voice and dress, write in a similar style, and, when displayed digitally, may pass as that person, regardless of whether the information is real or fake. Those who see it may know there is a human behind it. When AI does this, is it not doing something that humans can do, and fake, too?

It is known that AI cannot feel, nor does it have emotions, so it is often dismissed by some as nothing. AI might indeed have been nothing had it emerged centuries ago, in a non-digitized world. But in a digitally driven world, where people exist through video calls, voice messages, and text chats, AI may have added to the digital population of the earth, though not to the individual population.

The texts that people write online, the video images, and the audio interviews do not have consciousness or sentience, but they represent people and have potency in their purposes. AI does not have consciousness or sentience in the divisions of mind like emotions, perceptions, modulations, feelings, and sensations, but it has data [or memory] that is quite dynamic compared with the human baseline in the memory division.

What generative AI has become is still uncomfortable for many to accept, but in terms of what it means to have a digital existence on the internet, AI is already an entity, and that makes its regulation more complicated than that of regular software or an ordinary website.

WormGPT is a Warning for Enterprises to Upskill Their Employees

Ever since LLMs entered the mainstream, concerns have been raised over their capability to create large amounts of written content quickly and easily. Now, these concerns have come to fruition, as the black hat hacker community has finally tapped into the capabilities of LLMs for malicious attacks.

Reports have emerged that hackers have released a GPT-J powered hacking tool known as WormGPT. This tool capitalises on the already pervasive attack vector known as business email compromise, which is infamous for being one of the world’s top cyber threats.

Delving deeper into the effects of WormGPT only sheds further light on the urgent need for AI cybersecurity training. Hackers are getting their hands on more capable technology with the AI wave, putting the onus on companies to inform their workforce on the potential dangers of using AI.

WormGPT explained

Business email compromise, or BEC, is one of the most widely used attack vectors for hackers to spread malicious payloads. In this method, hackers impersonate a party doing business with a company to execute a scam. While these emails are usually flagged as spam or suspicious by email providers, WormGPT gives fraudsters a new set of tools. By training a model built on the open-source GPT-J on a vast array of malware-related data sources, attackers are able to craft convincing fake emails that sell the act of impersonation.

According to a post on a commonly used hacker forum, WormGPT does not have any of ChatGPT's limitations. It can generate text for a variety of black hat applications, a hacker term referring to illegal cyber activities. The model can also be run locally, leaving no trace on any servers as it would with an API. With the safety rails removed from the model, its output is not regulated by any alignment method, offering uncensored text ready for use in illegal activities.

The main issue with an application like WormGPT is that it gives attackers whose first language is not English the ability to create clean copy. Moreover, these emails also have a better chance of passing through spam filters, as they can be customised to the attackers' requirements.

WormGPT greatly lowers the barrier to entry for hackers because it is as easy to use as ChatGPT, with none of the protections. Moreover, emails generated with this model also convey a professional tone, possibly increasing their efficacy in carrying out an attack.

For those hackers that are too cheap to pay for WormGPT, the aforementioned forum has multiple ChatGPT jailbreaks to help users extract malicious output from the consumer bot. AIM has covered the security issues of jailbreaks extensively in the past, but custom-trained models represent a new level of AI-powered attacks.

Coping with AI attacks

As mentioned previously, BEC is one of the biggest cyberattack avenues. In 2022 alone, the FBI received over 21,000 BEC complaints, which totalled losses of about $2.7 billion. What's more, 2021 was the 7th year in a row that BEC was the top cyber threat for enterprises. Companies also suffer leakage of sensitive information through BEC, which can further open up the possibility of attacks.

WormGPT isn’t the only way generative AI is causing problems for companies either. LLMs can be used to write malware automatically, carry out social engineering attacks, find vulnerabilities in software code, and even help in cracking passwords. Generative AI poses a threat to the enterprise as well, especially in terms of data leakage.

Generative AI has also seen a slow uptake by companies due to a lack of security infrastructure around this powerful technology. While cloud service providers have begun entering the burgeoning AI market, companies are still in need of a strong, security-first LLM offering. By educating the workforce on the dangers of generative AI, companies can protect themselves from data leakage. The dangers of AI-powered hack attacks must also be emphasized, so as to enable employees to spot potential cyberattacks.

Companies are falling behind on cybersecurity readiness, as evidenced by this survey which found that only 15% of organisations are deemed to have a mature level of preparedness for security risks. With the rise of generative AI, companies need to pour resources into keeping their workforces up to date with the latest threats in AI-powered cybersecurity.

The post WormGPT is a Warning for Enterprises to Upskill Their Employees appeared first on Analytics India Magazine.

Neural Networks and Deep Learning: A Textbook (2nd Edition)

Sponsored Post

By Charu C. Aggarwal


Table of Contents
Publisher Page from Springer (Buy PDF e-copy or hardcopy)
Amazon Page (Buy hardcopy)

The chapters of this book span three categories:

  • The basics of neural networks: The backpropagation algorithm is discussed in Chapter 2. Many traditional machine learning models can be understood as special cases of neural networks. Chapter 3 explores the connections between traditional machine learning and neural networks. Support vector machines, linear/logistic regression, singular value decomposition, matrix factorization, and matrix factorization-based recommender systems are shown to be special cases of neural networks.
  • Fundamentals of neural networks: A detailed discussion of training and regularization is provided in Chapters 4 and 5. Chapters 6 and 7 present radial-basis function (RBF) networks and restricted Boltzmann machines.
  • Advanced topics in neural networks: Chapters 8, 9, and 10 discuss recurrent neural networks, convolutional neural networks, and graph neural networks. Several advanced topics like deep reinforcement learning, attention mechanisms, transformer networks, large language models, Kohonen’s self-organizing maps, and generative adversarial networks are introduced in Chapters 11 and 12.

The second edition is substantially reorganized and expanded with separate chapters on backpropagation and graph neural networks. Many chapters have been significantly revised over the first edition. Greater focus is placed on modern deep learning ideas such as adversarial learning, graph neural networks, attention mechanisms, transformers, and large language models.

The hardcopy is available from most booksellers like Amazon, but the e-copy of the book is available only from Springer as a PDF. A Kindle edition will appear in the near future. The PDF contains Kindle-like hyperlinks for navigation and can be used both on Kindle and mobile devices like the iPad.


Common Sense Media, a popular resource for parents, to review AI products’ suitability for kids

By Sarah Perez (@sarahintampa)

Common Sense, a well-known nonprofit organization devoted to consumer privacy, digital citizenship and providing media ratings for parents who want to evaluate the apps, games, podcasts, TV shows, movies, and books their children are consuming, announced this morning it will introduce another type of product to its ratings and reviews system: AI technology products. The organization says it will build a new rating system that will assess AI products across a number of dimensions, including whether the tech takes advantage of “responsible AI practices” as well as its suitability for children.

The new AI product reviews will particularly focus on those that are used by kids and educators, Common Sense notes.

The decision to include AI products in its lineup came about following a survey it performed in conjunction with Impact Research which found that 82% of parents were looking for a rating system that would help them to evaluate whether or not new AI products, like ChatGPT, were appropriate for children. Over three-quarters of respondents (77%) also said they were interested in AI-powered products that could help children learn, but only 40% said they knew of a reliable resource they could use to learn more about AI products’ appropriateness for their kids.

The new system will be designed with help from experts in the field of artificial intelligence and will also aim to inform forthcoming legislative and regulatory efforts around online safety for minors, the organization said.

Typically, Common Sense’s independent media ratings and reviews system, found at commonsensemedia.org, will provide an age-appropriateness rating alongside measures of how much positive or negative content the media may contain, across areas like positive role models and messages and diverse representation, or, on the more negative side, things like violence, vulgarity, scariness, drug use and more. It’s not clear how Common Sense plans to rate AI products under the new system.

However, in a position paper published in April 2023, Common Sense warned about several current issues with modern-day AI, including the frequent lack of “meaningful guardrails” that may make such systems dangerous for children. In addition, it warned that generative AI technologies, like OpenAI's ChatGPT, could be susceptible to bias in their training data, which includes large data sets of websites, books, and articles scraped from the internet.

“Beyond concerns about copyright…the amount of data needed to train generative models almost guarantees that any generative model has been trained to some degree on biased information, stereotypes, and misinformation, all of which may be propagated or reproduced when producing output,” the paper had noted.

Common Sense did not share a timeframe as to when it expected its new AI rating and reviews system to launch, but stressed the urgency of building such a resource.

“We must act now to ensure that parents, educators, and young people are informed about the perils and possibilities of AI and products like ChatGPT,” said James P. Steyer, founder and CEO of Common Sense Media, in a statement. “It is critical that there be a trusted, independent third-party rating system to inform the public and policymakers about the incredible impact of AI. In recent years, a number of tech companies have put profits before the well-being of kids and families. We have seen this movie before, with the emergence of social media and the subsequent failure to regulate these platforms, and, unfortunately, we know how that version ends. Quite simply, we must not make the same mistakes with AI, which will have even greater consequences for kids and society,” he added.

In addition to its media ratings, the organization's for-profit affiliate, Common Sense Networks, took inspiration from its kid-friendly recommendations with the launch of a streaming service called Sensical back in 2021. The service offers age-appropriate videos for children ages 2 through 10.


Bard Debuts in Europe and Brazil Amid Privacy Concerns and Intensifying Competition

Alphabet Inc., the parent company of Google, is spreading its wings in the AI landscape by launching its AI chatbot, Bard, in Europe and Brazil. This expansion signifies Bard's most significant growth since its introduction in the UK and the US in March, escalating the competition with ChatGPT, the chatbot from Microsoft-backed OpenAI.

Generative AI models like Bard and ChatGPT, capable of emulating human-like responses to questions, are increasingly becoming common players in the technological market. However, Bard's launch in the EU hit a temporary snag when the bloc's primary data regulator, the Irish Data Protection Commission, cited privacy issues. The commission highlighted that Google hadn't adequately illustrated how Bard would protect European users' privacy, delaying the chatbot's EU rollout.

Google assures that it has since engaged with the concerned watchdogs, addressing issues related to transparency, control, and choice. Amar Subramanya, Bard's engineering vice president, clarified that users would have the option to opt out of data collection. While declining to comment on the possible development of a Bard app, Subramanya stated, “Bard is an experiment. We want to be bold and responsible.”

Innovation and Controversy Amid Rising AI Investment

Google has further enhanced Bard by introducing new global features. Notable additions include the chatbot vocalizing responses and reacting to image-inclusive prompts. “You can collaborate with Bard in over 40 languages, including Arabic, Chinese, German, Hindi and Spanish,” announced Jack Krawczyk, Google's senior product director, in a blog post. Users can also tailor Bard's response style, pin or rename conversations, export code to more places, and incorporate images in prompts.

Simultaneously, the AI industry is experiencing a substantial surge in investment, with billions being funnelled into these potentially lucrative technologies. Even start-ups like Mistral AI, barely a month old, managed to secure an £86m seed funding round for developing and training large language models. High-profile tech figure Elon Musk also unveiled a new AI start-up named xAI, pulling together a team of engineers with experience from OpenAI and Google.

But while these advancements have sparked excitement, they've also ignited controversy. Google is currently facing a class-action lawsuit in the US over alleged misuse of users' personal data to train Bard. The claimants argue that Google's unauthorized data scraping from websites violated their privacy and property rights. Amid such concerns and potential drawbacks, the trajectory of AI development and adoption will be intriguing to follow.

What an AI App Store Means for Vertical AI Developers

July 17, 2023, by Shahar Chen, CEO & Co-Founder of Aquant

As AI continues to advance, the industry will begin to see a distinction between two categories: horizontal and vertical applications. While horizontal models offer versatile solutions applicable across domains, vertical models cater to specific industries, providing specialized AI capabilities. This is where an AI App Store enters the picture, revolutionizing the way niche markets and vertical AI solutions thrive.

Imagine a dedicated platform where vertical AI vendors can effortlessly showcase and distribute their specialized software to a global audience. An AI App Store similar to what OpenAI is reportedly considering would serve as the perfect gateway, enabling these vendors to monetize their offerings and expand their customer base significantly. Just as the introduction of the iPhone and its App Store transformed the mobile industry, an AI app store has the potential to reshape how AI is accessed, applied, and monetized, fostering innovation and empowering users to harness the full capabilities of AI technology.

While OpenAI may be the first to launch an app store format for various AI models, Amazon made its entry into the foundation model marketplace with Bedrock just two months ago. Additionally, AWS unveiled an API platform designed to help customers host generative AI models, including those developed by AI21 Labs, Anthropic, and Stability AI. So even if OpenAI's plan to launch falls through, a similar model will likely emerge given the demand.

The benefits of an AI App Store for niche markets and vertical AI solutions are abundant. For developers, it would provide an unprecedented opportunity to monetize their creations. By having their specialized AI software readily available on the platform, developers can reach millions of potential users and generate substantial revenue. This financial support would further drive innovation, encouraging developers to continue refining and expanding their vertical AI solutions to better cater to industry-specific needs.

Meanwhile, for users and technology buyers, an AI App Store offers access to an extensive library of cutting-edge AI tools. This availability of diverse and specialized AI applications empowers users to drive innovation in their respective fields or specific use cases. Whether it's service, healthcare, finance, manufacturing, or any other industry, vertical AI solutions tailored to specific domains can deliver precise insights and transformative capabilities. An AI App Store ensures that users have a secure environment to discover, evaluate, and adopt reliable AI software, fostering trust in the efficacy and effectiveness of the solutions they choose.

Many companies are more inclined to partner with a smaller vendor but struggle to know where to look. Darren Elmore, GM of Service at Ricoh New Zealand, describes the benefits of partnering with a smaller, more niche AI vendor over a larger, more established technology company: “The ability to pivot and make changes at speed when needed can happen when not suffocating under multiple layers of bureaucracy and hierarchy, which can happen with larger, more established organizations.” As AI continues to evolve and become more ubiquitous, vertical solutions are expected to be of increasing interest.

The democratization of AI software through an app store is poised to profoundly impact various industries. Niche markets that may have previously struggled to access or afford specialized AI solutions can now leverage the power of these tools to enhance their operations, improve decision-making, and gain a competitive edge. Small and medium-sized businesses, in particular, stand to benefit as they gain access to sophisticated AI capabilities that were once reserved for larger enterprises.

Moreover, the introduction of an AI App Store is likely to disrupt the current AI landscape, igniting healthy competition among vendors. Disruption often creates opportunities for new entrants and motivates existing players to innovate and differentiate themselves to remain competitive. This fierce competition will ultimately drive advancements in vertical AI solutions, resulting in improved performance, increased functionality, and expanded offerings. The end result is a win-win situation for both vendors and users, with continuous innovation pushing the boundaries of what AI can achieve.

An AI App Store represents a groundbreaking concept that has the potential to unlock the power of vertical AI solutions for niche markets. By providing a platform for developers to monetize their creations and reach a global audience effortlessly, an AI App Store fuels innovation and facilitates the democratization of AI. It empowers users with access to specialized AI tools, transforming industries and revolutionizing how businesses and individuals harness the capabilities of AI technology. As we embrace the future, an AI App Store stands as a pivotal catalyst for the growth, advancement, and widespread adoption of vertical AI solutions in the years to come.

About Shahar Chen:

Shahar is an entrepreneur and an expert in service. He brings more than 15 years of business and technical expertise in B2B software, specifically SaaS service software. Shahar started Aquant with his co-founder Assaf with the mission of bringing a powerful AI solution to the field service world, as a way to revolutionize service for service teams and end users. Prior to starting Aquant, Shahar spent 14 years at ClickSoftware, where he served in various positions from sales and consulting to product innovation.


LangChain is Garbage Software

LangChain is Garbage

Just because something is complicated and intricate does not mean that it is good. LangChain is possibly the best example of this. At its base, the main offering of LangChain is an abstraction wrapper that makes it easier for programmers to integrate LLMs into their programs. While it provides a relatively simple interface to LLMs, many developers swear against using LangChain in a production environment due to its idiosyncrasies.

Given the software's complexity, developers are questioning the intentions behind LangChain's existence. One LangChain user in a Reddit discussion said, “the thing with LangChain is that it solves the easy stuff you could do easily yourself, and didn’t put much thought around design and architecture in order to help you with the hard stuff.” In short, it is over-engineered.

In a recent blog post, Max Woolf also discussed the problems surrounding LangChain in depth. He argues that LangChain, instead of making things simpler, makes simple things relatively more complex. “…with that unnecessary complexity [LangChain] creates a tribalism which hurts the up-and-coming AI ecosystem as a whole. If you’re a newbie who wants to just learn how to interface with ChatGPT, definitely don’t start with LangChain,” advised Woolf.

Similar thoughts are being expressed on HackerNews, where people are outright saying that “LangChain is garbage software”. Even so, one user there conceded that LangChain is a valuable resource for expediting work and for understanding the inner workings of various processes: “It served as a kind of AI cookbook, offering recipes for different tasks.”

But the same user added that once they identified a suitable approach, they planned to re-implement everything using the underlying components that LangChain was purportedly abstracting. For now, using LangChain was faster than searching for and learning individual libraries and their APIs, so they intended to keep using it primarily within notebooks for ongoing projects, rather than in direct implementations, since its components feel like black boxes.

Woolf said his intent is not to criticise open-source software, but to push back on the claim that LangChain is good for beginners to dive into. Using an example of translating English to French, he shows that LangChain takes about the same amount of code as the official OpenAI library alone, except that it introduces more object classes for little obvious benefit. This makes learning machine learning even harder, since it adds the extra step of learning LangChain before building AI tools.
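The "easy stuff you could do easily yourself" is often just prompt templating: filling named slots in a string before sending it to a model. A minimal plain-Python sketch of that idea (the template text and variable names here are illustrative, not taken from LangChain or Woolf's post):

```python
# Prompt templating with nothing but the standard library -- the kind
# of glue code critics argue does not require a framework.

def render_prompt(template: str, **variables: str) -> str:
    """Fill a prompt template's named slots using str.format."""
    return template.format(**variables)

# Hypothetical translation template, in the spirit of Woolf's example.
TRANSLATE_TEMPLATE = (
    "You are a helpful assistant that translates {source_lang} to {target_lang}.\n"
    "Translate the following text:\n{text}"
)

prompt = render_prompt(
    TRANSLATE_TEMPLATE,
    source_lang="English",
    target_lang="French",
    text="I love programming.",
)
print(prompt)
```

The rendered string would then be passed to whatever LLM client the project uses; the framework debate is essentially about whether wrapping this kind of step in extra object classes pays for itself.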

This frustration is widespread among developers. People on HackerNews, Twitter, and Reddit alike have narrated how they were excited to use LangChain in their projects, only to realise later that it is a complete mess and needlessly complex, eventually leading them to strip it out of their projects entirely.

It’s not that bad

Developers have stated that what NumPy and Pandas did for machine learning, LangChain has done for LLMs, greatly increasing their usability and functionality. By using LangChain, developers can empower their applications by connecting them to an LLM, or leverage a large dataset by connecting an LLM to it.

LangChain went very early into agents and has by now assembled a truly impressive variety of features there. Moreover, OpenAI released function calling in its base offering, a feature arguably learned from LangChain. Though people argue this has made LangChain redundant as a separate tool, the feat is still impressive.

Frameworks such as LangChain follow a natural evolution observed in many frameworks that strive to accommodate emerging technologies. Nevertheless, developers believe that in this particular instance, the project is unlikely to grow beyond its original design limitations without a comprehensive rewrite, or it risks becoming obsolete compared with alternative frameworks.

Overall, LangChain seems to embody a complex and intricate approach despite its relatively young age. Adapting it to meet specific requirements would result in substantial technical debt that cannot easily be resolved, even with the venture capital that AI startups often rely on to manage such issues.

Ideally, API wrappers should simplify code complexity and cognitive burden when dealing with intricate ecosystems, given the mental efforts already required to work with AI. However, LangChain stands out as one of the few software pieces that introduce additional overhead in its prevalent use cases.

The post LangChain is Garbage Software appeared first on Analytics India Magazine.

H1 2023 Analytics & Data Science Spend & Trends Report

Partnership Post

H1 2023 Analytics & Data Science Spend & Trends Report
The All Things Insights marketing, analytics, and data science community completed an extensive survey covering what executives are thinking, how they're spending, and the issues and opportunities they face.

Our analytics and data science community has a solid core of top-level leaders along with a fair share of front-lines operators. The community is spread across several different industries with a fair share of large organizations represented along with some smaller companies.

Respondents feel strongly that analytics and data science are becoming more integrated into both corporate and operational decision making. While recessionary winds keep blowing, and many feel that doing more with less is the status quo, the report indicates that the analytics and data science discipline continues to feel positive about its growth prospects. Over 60% of our community feels that influence has grown post-pandemic. Nearly half of the community is focused on growth.

So, overall, things are looking up. Regarding “spend,” respondents indicate that budgets are either flat or slightly up when looking in the rear-view mirror. And when considering the future, analytics and data science practitioners are budget-bullish. Organizations are clearly still pumping money into this space, and as long as results are shown, there doesn't seem to be any let-up anytime soon. The future looks bright in terms of spend, as nearly half of respondents expect spend to increase, while another two-fifths expect budgets to remain constant.

A sample visualization from the H1 2023 Analytics & Data Science Spend & Trends Report

Thank you to our contributors:

  • Michael Bagalman, Vice President, Business Intelligence and Data Science for STARZ
  • Michelle Ballen-Griffin, Head of Data Analytics, Future
  • June Dershewitz, Board Member, Digital Analytics Association
  • Neil Hoyne, Chief Strategist, Google
  • Chuck Martin, Editorial Director, Informa Tech
  • Matthew Mayo, Editor-in-Chief, KDnuggets
  • Anu Sundaram, Vice President, Business Analytics, Rue Gilt Groupe
  • Steve Weiss, Content Manager, Data Science & Business Analytics, LinkedIn Learning
  • Sunny Zhu, ESG Data Analytics & Operations, Indeed.com

GET YOUR COPY NOW


Expedia adds new AI features to improve your travel planning. Here’s how


I've just got back from a 10-day break and can assure you that planning a trip is far from easy. From deciding where to go, how to get there, and what to do once you arrive, there are many moving parts that can complicate your arrangements.

It's for that reason that sites such as Expedia exist to help you navigate every step of the booking process. Now, Expedia is taking it up a notch, helping you book your next trip using artificial intelligence (AI).


In April, Expedia incorporated ChatGPT into its iOS app to enable conversational trip planning. Using this feature, users can get trip recommendations by simply asking the chatbot open-ended questions.

Starting early next month, this conversational planning feature will also come to the Expedia Android app, along with some new features, including the ability to save recommended activities to a Trip Planner.

Trip Planner will have all your recommended activities in one place, making it easier for travelers to view their options, according to Expedia. Later this summer, hotel recommendations made in a conversation, featuring images, price ranges, and reviews, will also be saved to the Trip Planner.


Users will also be able to revisit previous conversations and continue them at any time, so they can pick up their trip-planning activities from where they left off. Once again, this feature will arrive later this summer.

Meanwhile, the Hotels.com app is getting a smart-shopping feature that allows users to receive AI-powered recommendations, which are based on different factors, such as who's traveling, where, and for how long.
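Factors like party size, destination, and trip length could feed a simple scoring-and-ranking step. The sketch below is only a guess at the general shape of such a recommender; the field names, weights, and rules are illustrative assumptions, not the Hotels.com model.

```python
# Illustrative factor-based hotel ranking: score each hotel against the
# traveler's party size, destination, and trip length, then sort.
# All weights and fields here are assumptions for demonstration.

def score(hotel: dict, party_size: int, city: str, nights: int) -> float:
    s = 0.0
    if hotel["city"] == city:
        s += 10  # right destination matters most
    if hotel["max_guests"] >= party_size:
        s += 5   # hotel can actually fit the group
    if nights >= 7 and hotel.get("weekly_discount"):
        s += 3   # long stays favor weekly-discount properties
    return s

def recommend(hotels, party_size, city, nights, top=3):
    """Return the top-scoring hotels for this traveler profile."""
    return sorted(hotels, key=lambda h: -score(h, party_size, city, nights))[:top]

hotels = [
    {"name": "Harbor Inn", "city": "Lisbon", "max_guests": 2, "weekly_discount": True},
    {"name": "City Loft", "city": "Lisbon", "max_guests": 4},
    {"name": "Alpine Hut", "city": "Zermatt", "max_guests": 6},
]
picks = recommend(hotels, party_size=4, city="Lisbon", nights=8)
```

Here "City Loft" ranks first: it matches the city and fits four guests, outscoring the smaller "Harbor Inn" even with its long-stay discount.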


Wix’s new tool can create entire websites from prompts

By Kyle Wiggers

What does one expect from a website builder in 2023? That’s the question many startups — and incumbents — are trying to answer as the landscape changes, driven by trends in generative AI. Do no-code drag-and-drop interfaces still make sense in an era of prompts and media-generating models? And what’s the right level of abstraction that won’t alienate more advanced users?

Wix, a longtime fixture of the web building space, is betting that today’s customers don’t particularly care to spend time customizing every aspect of their site’s appearance.

The company’s new AI Site Generator tool, announced today, will let Wix users describe their intent and generate a website complete with a homepage, inner pages and text and images — as well as business-specific sections for events, bookings and more. Avishai Abrahami, Wix’s co-founder and CEO, says that the goal was to provide customers with “real value” as they build their sites and grow their businesses.

“The AI Site Generator leverages our domain expertise and near-decade of experience with AI to tune the models to generate high-quality content, tailor-made design and layouts,” Abrahami said. “We’ve conducted many tests and have had many in-depth conversations with users to be confident that we are delivering real value. That’s why we chose to do it now.”

Wix AI site generator

Image Credits: Wix

AI Site Generator might be Wix’s most ambitious AI experiment to date, but it’s not the company’s first. Wix’s recently launched text creator taps ChatGPT to give users the ability to generate personalized content for particular sections of a website. Meanwhile, its AI template text creator generates all the text for a given site. There’s also the AI domain generator for brainstorming web domain names, which sits alongside Wix’s suite of AI image editing, fine-tuning and creation tools.

The way Abrahami tells it, Wix isn’t jumping on AI because it’s the Silicon Valley darling of the moment. He sees the tech as a genuine way to streamline and simplify the process of building back-end business functionality, infrastructure, payments capabilities and more for customers’ websites.


To Abrahami’s point, small businesses in particular struggle to launch and maintain sites, potentially causing them to miss income opportunities. A 2022 survey by Top Design Firms, a directory for finding creative agencies, found that nearly 27% of small businesses still don’t have a website and that low traffic, followed by adding “advanced” functionalities and cost, are the top challenges they face with their website.

AI Site Generator takes several prompts — any descriptions of sites — and uses a combination of in-house and third-party AI systems to create the envisioned site. In a chatbot-like interface, the tool asks a series of questions about the nature of the site and business, attempting to translate this into a custom web template.

ChatGPT generates the text for the website while Wix’s AI creates the site design and images.
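The division of labor described above, a chat-style intake, ChatGPT for copy, and Wix's own models for design, could be wired together roughly as follows. Every function and field name in this sketch is a hypothetical stand-in, not Wix's actual API.

```python
# Hypothetical prompt-to-website pipeline in the shape the article
# describes: a chatbot collects intent, one model writes the copy,
# a separate model picks the design. All names are illustrative.

def intake_questions(answers: dict) -> str:
    """Fold the chatbot Q&A into a single site brief."""
    return "; ".join(f"{k}: {v}" for k, v in answers.items())

def generate_copy(brief: str) -> dict:
    """Stand-in for the ChatGPT call that writes page text."""
    return {"homepage": f"Welcome! ({brief})", "about": "Our story."}

def generate_design(brief: str) -> dict:
    """Stand-in for the in-house design/layout model."""
    return {"template": "single-page", "palette": "warm"}

def build_site(answers: dict) -> dict:
    """Assemble copy, design, and business-specific sections."""
    brief = intake_questions(answers)
    return {
        "copy": generate_copy(brief),
        "design": generate_design(brief),
        # Business-specific sections (bookings, e-commerce) keyed off intent:
        "sections": ["bookings"] if "bookings" in brief else [],
    }

site = build_site({"business": "yoga studio", "needs": "bookings"})
```

The key design point the article attributes to Wix survives even in this toy version: the text and the layout are produced by separate models from the same brief, so the design can be fitted to the content rather than the reverse.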

Other platforms, like SpeedyBrand, which TechCrunch recently profiled, already accomplish something like this. But Wix claims that AI Site Generator is unique in that it can build in e-commerce, scheduling, food ordering and event ticketing components automatically, depending on a customer's specs and requirements.


“With AI Site Generator, users will be able to create a complete website, where the design fits the content, as opposed to a template where the content fits the design,” Abrahami said. “This generates a unique website that maximizes the experience relevant to the content.”

Customers aren’t constrained to AI Site Generator’s designs, a not-so-subtle acknowledgement that AI isn’t at the point where it can replace human designers — assuming it ever gets there. The full suite of Wix’s editing tools — both manual and AI-driven — is available to AI Site Generator users, letting them make tweaks and changes as they see fit.

New capabilities focused on editing will arrive alongside AI Site Generator, too, like an AI page and section creator that’ll enable customers to add a new page or section to a site simply by describing their needs. The forthcoming object eraser will let users extract objects from images and manipulate them, while the new AI assistant tool will suggest site improvements (e.g. adding a product or changing a design) and create personalized strategies based on analytics and site trends.

Wix assistant

Image Credits: Wix

“The current AI revolution is just beginning to unleash AI’s true potential,” Abrahami said. “We believe AI can reduce complexity and create value for our users, and we will continue to be trailblazers.”

But there’s reason to be wary of the tech.

As The Verge’s James Vincent wrote in a recent piece, generative AI models are changing the economy of the web — making it cheaper and easier to generate lower-quality content. Newsguard, a company that provides tools for vetting news sources, has exposed hundreds of ad-supported sites with generic-sounding names featuring misinformation created with generative AI.

Cheap AI content also risks clogging up search engines — a future small businesses almost certainly don’t want. Models such as ChatGPT excel at crafting search engine-friendly marketing copy. But this spam can often bury legitimate sites, particularly sites created by those without the means or know-how to optimize their content with the right keywords and schema.

Even generative AI used with the best intentions can go awry. Thanks to a phenomenon known as “hallucination,” AI models sometimes confidently make up facts or spew toxic, wildly offensive remarks. And generative AI has been shown to plagiarize copyrighted work.

So what steps is Wix taking to combat all this? Abrahami didn’t say specifically. But he stressed that Wix uses a “bevy” of tools to manage spam and abuse.

“Our AI solutions are tried and tested on large-scale quality data and we leverage the knowledge gained from usage and building workflows across hundreds of millions of websites,” Abrahami said. “We have many models, including AI, that prevent and detect model abuse because we believe in creating a safe digital space where the fundamental rights of users are protected. ChatGPT has an inherent model to prevent inappropriate content generation; we plan to use OpenAI’s moderation.”
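Since Abrahami says Wix plans to use OpenAI's moderation, a pre-publish gate is one plausible integration point: generated text only goes live if the moderation response is clean. The sketch below mirrors the general shape of a moderation-API response (a list of results, each with a `flagged` boolean); the gating logic and `publish_section` are assumptions for illustration, not Wix's real pipeline.

```python
# Sketch of gating generated text behind a moderation check. The
# response dicts mimic the shape of a moderation-API reply; the
# threshold logic is an illustrative assumption.

def is_safe(moderation_response: dict) -> bool:
    """True only if no result in the response is flagged."""
    return not any(r.get("flagged", False)
                   for r in moderation_response.get("results", []))

def publish_section(text: str, moderation_response: dict) -> str:
    """Publish generated text only when moderation passes."""
    if not is_safe(moderation_response):
        return "[content withheld pending review]"
    return text

# Example responses in the moderation-reply shape:
ok = {"results": [{"flagged": False}]}
bad = {"results": [{"flagged": True}]}

published = publish_section("Welcome to our bakery!", ok)
withheld = publish_section("some generated text", bad)
```

In a real deployment the response dict would come from the moderation endpoint itself, and a flagged section would likely be regenerated or queued for human review rather than simply withheld.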

Time will tell how successful that strategy ends up being.