Stability AI Is Bleeding High-Profile Employees

After a bombastic exposé on the state of affairs at Stability AI, the company is now bleeding top executives. The company behind Stable Diffusion has lost David Ha, its head of research, and Ren Ito, its chief operating officer. This comes just months after another high-profile VP left the company.

Following the report and its accompanying backlash, Mostaque appears eager to rejig the company’s upper management. When approached for comment on Ito’s firing, Mostaque stated that it was part of a ‘broader shake-up at the company’. Earlier this year, Christian Cantrell, Stability AI’s VP of Product, also left the startup to found his own company.

The piece exposed Mostaque’s purported shady business practices, with an anonymous Stability employee stating, “What he is good at is taking other people’s work and putting his name on it, or doing stuff that you can’t check if it’s true.”

After a $101 million funding round late last year, Stability is scaling up its workforce. The company has not only expanded its headcount to over 185, but has also hired Ty Walrod and Afraj Gill to head up its growth efforts. In a tweet, Mostaque emphasised the need to ‘scale right’.

Stability AI has also come under fire for taking credit for the Stable Diffusion algorithm. Reportedly, the model was created by the University of Heidelberg, with the code being released in 2021. This means that Stability does not possess any ownership rights to the model.
The company is currently in the midst of updating Stable Diffusion, recently releasing Stable Diffusion XL v0.9. It also open-sourced StableLM, a GPT-like LLM which can be used for both commercial and research purposes.

The post Stability AI Is Bleeding High-Profile Employees appeared first on Analytics India Magazine.

Gartner: 81% of IT Teams Will Grow Despite AI Adoption


Large enterprise CIOs — 81% of them — plan to increase their IT headcount in 2023, according to a new Gartner study.

“Even with advances in AI, Gartner predicts that the global job impact will be neutral in the next several years due to enterprise adoption lags, implementation times and learning curves,” said Jose Ramirez, senior principal analyst at Gartner, in a press release.

Jump to:

  • Why CIOs plan to increase IT headcount
  • The effect of AI on the job market
  • What CIOs look for as they hire IT roles
  • Other hiring trends include reskilling and fusion teams
  • Survey methodology

Why CIOs plan to increase IT headcount

Companies working on digital transformation rely on full-time IT workers to do it, Gartner reported. Full-time IT employees perform 56% of this work. Another 21% is done by IT contractors or part-time employees, 9% by IT consultants and 4% by IT gig workers.

IT leaders are being asked to take on large projects, and sometimes they do not have enough people to work on them. Of the CIOs surveyed, 67% said they plan to increase their IT headcount by at least 10% to help their company’s overall digital transformation.

“Enterprises have undertaken various digital initiatives over the past two years, with operational excellence and customer or citizen experience being the most popular,” said Ramirez in the press release. “Still, these initiatives often do not meet enterprise needs quickly enough.”

However, some large enterprises ran into problems: 41% reported hiring for IT roles has slowed, 35% said that their overall IT budget has decreased and 29% noted that there’s an IT hiring freeze in their organization.

The effect of AI on the job market

Gartner’s prediction that the global job impact of AI technology will be “neutral” acts as a counterpoint to predictions such as the University of Pennsylvania’s and OpenAI’s claims that AI will replace 20% of human workers. Of the large enterprise CIOs surveyed, only 4% said they use “AI-augmented workers” today.

The world of AI in the IT workforce is still developing, and some details have yet to be determined.

“While investments in AI technology and the need for AI skills are expected to grow significantly, there are concerns around the potential legal issues that may arise from generative AI, such as copyright infringement and confidential information breaches,” Ramirez added in an email to TechRepublic.


Automation and AI-augmented work account for just over 9% of the work done under the IT purview today, the CIOs reported.

At the same time, 46% of CIOs plan to automate some or all of their workflow to free up IT time.

What CIOs look for as they hire IT roles

In response, the CIOs surveyed try to hire from wider geographic areas or to relax some hiring requirements, such as “hiring early-career technologists,” Ramirez said in the press release.

The most in-demand IT skills, according to CIOs at large enterprises, are:

  • Cybersecurity.
  • Cloud platforms.
  • Customer or user experience.

The most important factors in determining whether a person is qualified for the IT team are technical skills; soft skills, such as communication and relationship management; and fit with the company culture.

Other hiring trends include reskilling and fusion teams

Nearly half (47%) of the surveyed CIOs plan to invest in training programs to upskill and reskill IT staff, to ensure their teams have the roles, soft and technical skills, and capacity the enterprise needs to meet business objectives.

Of the surveyed CIOs, 46% plan to establish fusion teams, in which technical and business stakeholders work together toward cross-disciplinary business success.

Ramirez stated in the press release that a blended team of technical and business stakeholders can “ensure that IT has relevant roles, skills and capacity to meet enterprise objectives.”

Survey methodology

The survey was conducted among 501 respondents, 183 of whom were CIOs at large enterprises with total annual revenue of $1 billion USD or more, across North America, EMEA and APAC. The survey ran from October through November 2022.


DragGAN is Finally Open Source 

The much-awaited DragGAN code is now officially out. The code is built on StyleGAN3, with parts borrowed from StyleGAN-Human, and the code related to the DragGAN algorithm is licensed under CC-BY-NC.


DragGAN is an image editing tool that allows you to simply drag elements of a picture to change their appearance. A group of researchers from Google, alongside the Max Planck Institute for Informatics and MIT CSAIL, recently released DragGAN, an interactive approach for intuitive point-based image editing.

DragGAN operates by optimizing the latent code of a pretrained GAN rather than editing pixels directly. At each step, a motion-supervision loss computed on the generator’s intermediate feature maps nudges user-selected handle points a small step toward their target positions, while a point-tracking step uses those same discriminative features to re-localize the handle points after every update.

Using DragGAN, you can manipulate the dimensions of a car, change facial expressions, or even rotate a subject as if it were a 3D model.
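One way to picture this point-based dragging is as a small optimization loop that repeatedly nudges a handle point toward its target. The sketch below is a heavily simplified, hypothetical illustration: the toy generator and all names are stand-ins, not the actual StyleGAN-based implementation.

```python
# Heavily simplified, hypothetical sketch of DragGAN-style dragging.
# The real method optimizes a StyleGAN latent code using motion supervision
# on feature maps plus point tracking; here a toy "generator" maps a 2-D
# latent directly to the handle point's position, for illustration only.

def toy_generator(latent):
    # Stand-in for a GAN generator: returns the handle point's position.
    return latent

def drag(latent, target, step=0.1, iters=100):
    lx, ly = latent
    for _ in range(iters):
        hx, hy = toy_generator((lx, ly))
        # "Motion supervision": adjust the latent so the handle takes a
        # small step toward the user-chosen target point.
        lx += step * (target[0] - hx)
        ly += step * (target[1] - hy)
    return (lx, ly)

final = drag((0.0, 0.0), target=(5.0, 3.0))
# final ends up very close to the target (5.0, 3.0)
```

In the real system, the "step toward the target" is driven by a loss on the generator’s feature maps rather than on coordinates, and point tracking re-finds the handle after each latent update.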

Because it is user-friendly and carries few of the technicalities associated with traditional editing tools, DragGAN is in the race to give tough competition to Photoshop, which is now integrated with Adobe’s Firefly.

GAN models are about more than pretty pictures. While there are obvious reasons why diffusion models are gaining popularity for image synthesis, generative adversarial networks (GANs) enjoyed a similar surge of interest when they were revived in 2017, three years after they were proposed by Ian Goodfellow.

The post DragGAN is Finally Open Source appeared first on Analytics India Magazine.

Salesforce Launches Starter in India

Salesforce on Tuesday announced the launch of Salesforce ‘Starter’ for Micro, Small and Medium Enterprises (MSMEs) in India.

Starter is an easy-to-use CRM that combines sales, service and email outreach tools in one suite, giving companies the tools to improve customer experiences, reduce costs, and drive revenue from the start.

“SMBs are the backbone of any economy and faster adoption of digital technologies has helped them remain resilient in the face of adversity. It’s clear that effective use of technology can be a differentiator for SMBs, helping build relationships and establish a foundation for growth,” said Arun Parameswaran, MD – Sales & Distribution, Salesforce India.

Starter will help businesses by offering simplified signup, guided onboarding, and a new checkout flow that makes it easier to bring more users into Salesforce, the company said.

In addition, Starter includes built-in Einstein AI for Activity Capture to automatically help keep email and calendar info up-to-date.

Companies can enable users to view and act on unified customer data across a suite of sales and service applications.

Earlier this month, Salesforce announced the launch of AI Cloud, a suite of products that brings Salesforce’s AI applications to enterprise CRM to boost productivity.

Salesforce also recently announced that it is expanding its Generative AI Fund, doubling the USD 250 million fund to USD 500 million as part of its continuing commitment to bolster the AI startup ecosystem and spark the development of responsible generative AI. Salesforce Ventures has already invested in AI firms such as Hearth, You.com, Anthropic and Cohere.

The post Salesforce Launches Starter in India appeared first on Analytics India Magazine.

Nvidia and Snowflake Integration Aims to Bridge the Generative AI and Data Security Gap

June 27, 2023, by Jaime Hampton


As companies begin leveraging generative AI to build custom applications, ensuring their data is securely governed during the development process is proving to be tricky.

Data entered into foundation LLMs is not secure, as several high-profile leaks have shown. A pop-up now greets ChatGPT users upon login with the admonishment: “Please don't share any sensitive information in your conversations.” One recent incident occurred when Samsung engineers shared lines of confidential code with ChatGPT to troubleshoot a coding problem, which led the company to ban the use of such chatbots last month.

What if instead of bringing your data to generative AI systems, you could bring generative AI to your data? That’s the idea behind a new integration between Snowflake and Nvidia that will make it easier to build custom generative AI applications using proprietary data within the Snowflake Data Cloud.

Nvidia CEO Jensen Huang and Snowflake CEO Frank Slootman discussed this approach during a fireside chat Monday evening at the Snowflake Summit 2023 in Las Vegas.

“A large language model turns any data knowledge base into an application,” Huang said to the crowded room.

“The intelligence is in the data,” Slootman attested.

This partnership will give enterprises the ability to use data in their Snowflake accounts to make custom LLMs for generative AI uses like chatbots, search, and summarization. Proprietary data, which can sometimes range from hundreds of terabytes to petabytes of raw and curated business information, remains fully secured and governed with no need for data movement, Nvidia asserted in a release.

Snowflake CEO Slootman and Nvidia CEO Huang discuss the partnership during a fireside chat moderated by Sarah Guo. (Source: Snowflake)

During a press briefing, Manuvir Das, Nvidia’s head of enterprise computing, explained that Nvidia views the potential of LLMs as akin to professional employees with years of company-specific experience.

“If [a company] could start with a large language model, but really produce a custom model for themselves, that has all of the knowledge that is specific to their company, and that is endowed with skills that are specific to what that company's employees do, then that would be a better option than just a generic foundational model,” he said.

Das says the key difference between foundation models and custom models is rooted in an enterprise’s unique data, often stored in data lakes and warehouses. He notes that Snowflake’s data warehousing capabilities combined with Nvidia’s strengths in AI infrastructure and software have positioned the companies to significantly advance the creation of custom enterprise models.

Nvidia’s NeMo framework is an end-to-end platform for building custom models and seems to be a cornerstone of this project, as Snowflake plans to host and run NeMo in its Data Cloud where its capabilities will be integrated alongside NeMo Guardrails, a feature that allows governance and monitoring of AI model behavior. NeMo provides a library of pre-trained foundation models, ranging from 8 billion to 530 billion parameters, which Snowflake customers can use as a starting point for further training, Das noted in the press briefing.

Snowflake offers a host of industry-specific data clouds including those for manufacturing, financial services, healthcare and life sciences, media and entertainment, and government and education, to name a few. The companies assert their collaboration will further enable customers to transform these industries by bringing customized generative AI applications to different verticals with the Data Cloud. “For example, a healthcare insurance model could answer complex questions about procedures that are covered under various plans. A financial services model could share details about specific lending opportunities available to retail and business customers based on specific circumstances,” Nvidia said in a release.

Das noted that both Nvidia and Snowflake will share responsibility for the security of data used for training, and said the integration work being done in this partnership treats data security as a key consideration. The NeMo engine will operate within the Snowflake Data Cloud, which was designed to ensure computation on the data remains within boundaries set for each customer.

Nick Amabile, CEO and chief consulting officer at data consultancy DAS42 told Enterprise AI in an email interview that this announcement is big news for enterprises: “Yesterday’s fireside chat was all about how enterprises need to shift their thinking from ‘bringing data to their apps’ to ‘bringing their apps to their data.’ This partnership will drastically increase the speed of enterprises to develop, train, and deploy AI models enabling them to bring new experiences to their customers and better productivity to their employees.”

Amabile cautioned that businesses still need to carefully consider where and how AI can impact their business before deciding where to invest; he suggested a consulting firm can help unpack how these technologies can be used to drive business value.

Alexander Harrowell, principal analyst for advanced computing for AI at technology research group Omdia, said in a statement that this partnership represents a large opportunity in the burgeoning generative AI sector.

“More enterprises than we expected are training or at least fine-tuning their own AI models, as they increasingly appreciate the value of their own data assets,” he said. “Similarly, enterprises are beginning to operate more diverse fleets of AI models for business-specific applications. Supporting them in this trend is one of the biggest open opportunities in the sector.”

Related

TikTok rolled out a new way for creators to make money — but there’s a catch


TikTok's platform has enabled creators to build large followings from short, vertical videos they post. As a result, brands are willing to pay creators a good amount of money for brand exposure to their audiences.

TikTok's new feature, TikTok Creative Challenge, is reimagining how these collaborations take place and making it easier to connect creators with brands to create ads and earn money.


The in-app feature allows creators to browse different brand ad postings, known as challenges, and submit their original video ad for the posting they choose.

After the submission process, the creator will be pinged with revisions if necessary, which the creator can appeal. Once approved, the ad won't be posted on the creator's profile. Instead, it will run as ads on TikTok's For You Feed.

The pay is a little less straightforward.

The "rewards" or payments on the video depend on several metrics including qualified video views and conversions. This means that even after working on and submitting a high-quality video, you may not get much of a payment at all.


Another concern is how long the brand can run the ad. Although this aspect is not mentioned in the release, this is an important piece of information because, for many influencers, it could mean losing money.

For example, an influencer’s rate can change depending on the number of followers they have; what seems like a good payment now could undervalue an influencer whose follower count, and with it the worth of their image and likeness, blows up overnight.

Another factor is that the longer an ad runs, the more an influencer can typically charge for the content they are creating.


This program minimizes the time a creator has to spend finding and negotiating with brands, but it also reduces the creator’s autonomy over the fees they charge.

To participate in the challenge, creators must be at least 18 years old, have a US-based account, and have at least 50,000 followers, according to the guidelines.


GitHub CEO: AI and software development are now inextricably linked

Frederic Lardinois (@fredericl)

“AI and software development are now inextricably linked for the rest of our lives,” GitHub CEO Thomas Dohmke said today during a presentation at the Collision conference in Toronto, Canada. “In a world eaten by software, every developer deserves a co-pilot.”

In an interview after his talk, Dohmke expanded on this a bit when I asked him if he believes every developer will be using AI in the near future. “I think the obvious answer to that one is that the FOMO in companies is already so big that they are looking at the competition and asking themselves if their competitor has already adapted [GitHub] Copilot — and that means that that competitor has — and doesn’t really matter if it’s 20%, 30% or 40% — that competitor has an advantage.”

On top of that, he believes there is really no disadvantage for developers to use a tool like Copilot. “It’s just so natural. There’s really no reason to not use Copilot,” he said. “I think it’s becoming part of the standard toolset that every developer will be using. Ultimately, developers not using it will exist, the same way COBOL developers still exist.”

He also noted that tools like Copilot will get integrated across the development lifecycle.

GitHub’s Copilot was among the first AI-based code completion services and remains the most popular, even as the likes of AWS CodeWhisperer and, most recently, Google’s Bard-based competitors are seeing some adoption from developers as well. As part of Dohmke’s talk, GitHub today also announced some of its latest findings on how Copilot is being used by developers.

One number that hasn’t changed is that GitHub still says that, based on its analysis of a sample of almost one million users, developers accept just under 30% of code suggestions — and the longer they use it, the higher their acceptance rate, with developers accepting closer to 35% of suggestions after six months of use. Those numbers, Dohmke believes, won’t change all that drastically in the near future, though he noted that 50% would “make us happy.”

Image Credits: GitHub

What’s maybe just as important is that Copilot is especially useful for less experienced developers (which GitHub defined by the average number of repository actions on GitHub prior to using Copilot).

“As developers continue to become fluent in prompting and interacting with AI, and new models that allow natural language to power the development lifecycle, we anticipate that 80% or more of code will be written with AI, helping democratize software development for more people,” GitHub’s report, co-written by Dohmke and Marco Iansiti and Greg Richards from Keystone.AI, explains.

Given the explosive growth in AI and the dearth of developers in this space, GitHub also notes that generative AI tools hold a lot of promise to make those developers more productive. The company expects that globally, generative AI-powered developer tools will add $1.5 trillion to the global GDP by 2030 and that each missing skilled developer will account for $100,000 in GDP loss. Meanwhile, generative AI developer tools can provide productivity equivalent to roughly 15 million additional developers, GitHub believes (hence the $1.5 trillion in total impact). The company believes that’s a conservative estimate.
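GitHub’s headline figures are internally consistent, as a quick check shows (the values below are taken from the report’s claims; the multiplication is the only thing added here):

```python
# Sanity check on GitHub's estimate: 15 million developer-equivalents,
# each valued at $100,000 of GDP, should yield the $1.5 trillion cited.
missing_developer_equivalents = 15_000_000  # developers generative AI tools could substitute for
gdp_per_developer = 100_000                 # dollars of GDP attributed to each skilled developer
total_impact = missing_developer_equivalents * gdp_per_developer
# total_impact is 1_500_000_000_000, i.e. the $1.5 trillion GitHub cites
```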


Freshworks Adds Generative AI Capabilities To Supercharge Productivity 

Freshworks is the latest cloud-based SaaS giant to add generative AI capabilities to its platform, through three new services: Freddy Self Service, Freddy Copilot and Freddy Insights. The company is also planning to develop proprietary language models and integrate general-purpose LLMs to cater to specific customer requirements.

The new predictive and assistive generative AI capabilities embedded within Freshworks solutions and platform help support agents, sellers, marketers, IT teams and leaders maximise productivity.

Freshworks is leveraging Microsoft Azure OpenAI Service to ensure the privacy and security of customers’ data.

Freddy Copilot: Freddy Copilot aims to enhance productivity for support, sales, marketing, and developers by facilitating faster workflows through interactive prompts within Freshworks products. It enables users to perform their tasks efficiently and develop new applications to expand their capabilities. During beta testing, Freddy Copilot was adopted by 390 companies and reduced effort by up to 83%. Over 2,500 developers who are already using the Freshworks Developer Platform can now leverage Freddy Copilot to create innovative, high-quality apps at a quicker pace.

Freddy Self Service: Freddy Self Service empowers companies with the technology to deliver personalized automation on a large scale. It leverages Freshworks’ platform and a powerful language model to enable personalized automation that enhances agent productivity. By utilizing extensive language and account-specific models, Freddy Self Service handles a significant portion of L0/L1 queries from employees and customers within Freshdesk and Freshservice, delivering customized responses. This allows customer support and IT personnel to dedicate their efforts to higher-value projects and tasks.

According to Sergey Kolosovosky, IQor’s senior VP of application development and solutions, Freshworks is the core of IQor’s digital customer and employee universe. With 40,000 employees using Freshworks’ Freddy AI capabilities, IQor is keen to equip them with tools for daily functions and enhance engagement; the focus is on happy employees and delighted customers, with access to powerful generative AI software a potential game changer for the firm.

Freddy Insights: Freddy Insights enables businesses to streamline operations and foster business growth. Freshworks’ generative AI analyzes customer and employee support data to automatically identify areas that require improvement. It also evaluates marketing and sales effectiveness and provides recommendations for optimizations that can enhance performance and boost revenue. Freddy Insights additionally offers proactive quality management, assessing support quality and ensuring that staff members meet established goals. It guides agents in improving their skills through every customer interaction.

Freshworks’ AI & Analytics Play

Back in March, Freshworks announced enhancements to Freddy AI powered by OpenAI’s ChatGPT and the GPT-4 LLM, consisting of a conversation summariser, rephraser, autocomplete, article generator, and email copy generator.

Freshworks was one of the first B2B SaaS companies to leverage AI in its products.

In 2018, they launched Freddy AI, an AI assistant powered by their Neo platform, which leveraged Google Contact Center AI. In 2020-21, they trained their models using customer-agent conversations and improved their bot builder.


The post Freshworks Adds Generative AI Capabilities To Supercharge Productivity appeared first on Analytics India Magazine.

Nvidia teams up with Snowflake for large language model AI


At Snowflake's user conference in Las Vegas Monday, Snowflake Summit 2023, the cloud database maker announced a partnership with chip giant Nvidia that combines forces for processing so-called foundation models for AI.

According to the arrangement, Snowflake customers will be able to rent cloud GPU capacity in Snowflake's data warehouse installations, and they'll use that capacity to refine neural networks with Nvidia's NeMo framework, introduced last fall. Foundation models are very large neural networks, such as large language models, that are customarily "pre-trained" — that is, they have already been developed to a level of capability.


A customer will use Snowflake's data warehouse to employ the customer's own data to develop a custom version of the NeMo foundation model to suit their needs.

"It's a very natural combination for the two companies," said Nvidia's vice president of enterprise computing, Manuvir Das, in a press briefing. Das continued:

"For Nvidia and Snowflake to get together and say, well, if enterprise companies need to create custom models for generative AI based on their data, and the data is sitting in Snowflake's data cloud, then why don't we bring Nvidia's engine for model making, which is NeMo, into Snowflake's data cloud so that enterprise customers, right there on their data cloud, can produce these models that they can then use for the use cases in their business."

Also: Databricks' $1.3 billion buy of AI startup MosaicML is a battle for the database's future

The announcement is part of a growing trend to employ AI, and especially generative AI, as a business tool. On Monday, Apache Spark developer Databricks stunned the tech industry with a $1.3 billion acquisition of startup MosaicML, which runs a service to train and deploy foundation models.

Snowflake will implement the service by procuring Nvidia GPU instances from the cloud service providers with whom it already works. "Now, we are just talking about an extension of that [relationship] to include GPU-based instances," said Das.

In a separate release on Tuesday, Snowflake said it will extend its Snowpark developer platform with what it calls Snowpark Container Services, currently in a private preview. The company is "expanding the scope of Snowpark so developers can unlock broader infrastructure options such as accelerated computing with Nvidia GPUs and AI software to run more workloads within Snowflake's secure and governed platform without complexity," according to the release, "including a wider range of AI and machine learning (ML) models, APIs, internally developed applications, and more."

In response to a question from ZDNET about how customer data would be protected in the arrangement between the two, Das indicated the main responsibility lies with Snowflake.

"Snowflake has a design construct to ensure that when a customer chooses to do computation on the Snowflake data cloud, it remains within the boundaries for that customer," said Das, "and then the NeMo engine just fits into that model."

Added Das: "Certainly, there is a responsibility for NeMo as well" for security, "and that's why it's joint engineering work."


The partnership follows a recent announcement by Nvidia with ServiceNow to use NeMo with ServiceNow's customers in IT services. Where the Snowflake arrangement is "general purpose," said Das, the ServiceNow partnership "is more the ISV (independent software vendor) sort of model." ServiceNow is using the NeMo code to train customer models for each of their customers, "so that when each of their customers does their IT work, and opens [trouble] tickets, they'll get responses that are specific to that customer."

Nvidia CEO Jensen Huang has positioned software as an important growth vector for his company, which makes billions selling GPU hardware to develop neural networks. NeMo is part of the enterprise software stack the company is promoting, in large part through partnerships with cloud providers.

In March, Nvidia CFO Colette Kress told investors at a Morgan Stanley conference, "Our software business right now is in the hundreds of millions [of dollars of revenue] and we look at this as still a growth opportunity as we go forward."



The first GPT-powered smart home platform is here


With generative AI taking over the artificial intelligence world, it was only a matter of time before it came to the smart home. Josh.ai, a home automation system for the connected home, has officially launched JoshGPT.

Josh.ai pitches Josh as an all-in-one replacement for your smart home automation system, with brains it says your current voice assistant can’t offer.


JoshGPT is powered by the same technology that powers OpenAI’s ChatGPT, allowing it to answer more specific questions and handle nuances that smart home assistants like Siri, Alexa, and Google Assistant can’t understand.

For example, you may ask Alexa or your favorite smart home assistant this question:

"Explain how TV screens work."

But with JoshGPT, you can add variables like "Explain how TV screens work like I'm five" or "As if I've never seen one before."


You can also customize shopping requests, activities, dining, and other general questions with specifics, or you can ask Josh to do more than one thing at a time, like three or four questions in a row.
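A toy sketch of the kind of prompt composition described above, a base question plus an optional style modifier, or several requests in a row. The function names and structure here are illustrative, not Josh.ai’s actual API:

```python
# Hypothetical sketch of JoshGPT-style prompt composition: append a style
# modifier to a base question, or chain several requests into one utterance.

def build_prompt(question, modifier=None):
    # Append a modifier such as "like I'm five" when the user supplies one.
    return f"{question} {modifier}" if modifier else question

def chain_requests(requests):
    # Join several questions into a single utterance, as JoshGPT reportedly
    # supports three or four questions in a row.
    return " Then, ".join(requests)

prompt = build_prompt("Explain how TV screens work", "like I'm five")
chained = chain_requests(["Dim the lights", "play some jazz"])
```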

"As the first company to offer the convenience of hands-free access to generative AI, Josh.ai is delivering a supercharged assistant at home and on the go for our clients," said Alex Capecelatro, CEO of Josh.ai.

The Josh.ai system is an exclusive experience that can only be set up with professional installers in customer homes and can reportedly cost anywhere from $4,000 to $14,000. The system includes two location-aware Josh Nano and Micro microphones and a handheld Josh remote, all brought together by the Josh app.


With the public launch of JoshGPT, the Josh.ai home automation system also gets new Intelligent Areas, which group devices and rooms to offer a more customized experience in the automated home, though much of this customization and setup will have to be done by the installers.

"We are proud to introduce Intelligent Areas, System Setup, and Room Customization as innovative features that represent the culmination of feedback from our network of more than 1,400 certified dealers," said Capecelatro.

Installers will use the System Setup to configure the rooms and devices for the Intelligent Areas. They'll also be able to define the audio and video start-up volumes and lights, shades, and other devices.
