Killed By Amazon (Part 1)

In July 1995, an online store launched selling the world’s largest collection of books to anyone with World Wide Web access, and the rest is history. Amazon has since become the go-to online retailer for over half of the world’s online shoppers.

The company’s execs have been vocal about Amazon’s “fail fast, fail often” culture. While CEO Jeff Bezos built a dominant global online retailer and cloud computing business, he has led the creation of many duds, too. He famously called the e-commerce giant “the best place in the world to fail” in his 2016 shareholder letter.

At Amazon, many products have come to an end, some due to slowing sales and the rest due to shifts in the company’s focus. Here is your first tour of the ‘Amazon Graveyard’.

  1. Auctions and zShops

Similar to eBay, Amazon once ran an auction site called Amazon Auctions. It debuted in 1999 and ended a few years later, replaced by Amazon Marketplace.

  2. Early Reviewer Program

Launched in 2018, the Early Reviewer Program was introduced to boost review counts under Amazon’s supervision. When the program was discontinued in April 2021, the company pushed third-party sellers towards Vine and the “Request a Review” button.

  3. Kindle Fire HDX

Launched in 2013, the Fire HDX was a favourite, but it was discontinued in 2015 and replaced by the Fire HD line of products.

  4. Amazon Fire TV Recast

Released in 2018, Amazon’s Fire TV Recast was a DVR that allowed cord-cutters to record and watch shows received via TV antenna. The company announced in 2022 that it would pull the plug on the product, though it will continue to offer software security updates until 2026.

  5. Fire Phone

Launched in 2014, the Fire Phone was heavily criticised for its lack of features and high price almost from the moment it went on sale. Even though Amazon cut the price from $200 to just $0.99 with a two-year contract, the phone couldn’t be saved and was discontinued about a year later.

  6. Amazon Honor System

Amazon’s Honor System was launched in 2001 to allow customers to make donations or buy digital content, with Amazon collecting a percentage of the payment plus a fee. The service was discontinued in 2008 and replaced by Amazon Payments.

  7. Amazon Music Storage

The Amazon Music feature let users upload their MP3 files from other sources, but the company ended the Amazon Music Storage subscription service in January 2019.

  8. Amazon Halo

In April this year, Amazon decided to axe its health-tracking bracelet. Released in 2020, the Halo band paired with an app that tracked users’ activity, body fat and emotional state, and integrated with Amazon’s Alexa digital assistant.

  9. Amazon Glow

Just over a year after its debut, Amazon killed its kids-focused video device, which used projection technology to create a virtual space on the tabletop.

  10. Kindle MatchBook

Launched in 2013, the digital bundling programme let authors and publishers offer a heavily discounted ebook when a reader bought the hardcover or paperback. Amazon shut it down entirely in October 2019.

  11. Amazon Video Direct

Amazon Prime Video Direct (APVD) was the only mainstream service that accepted unsolicited film submissions. That changed in 2021, after which it no longer accepted submissions of non-fiction and short-form content.

  12. Amazon Prime Pantry

Launched in 2014, Amazon Pantry (originally known as Prime Pantry) was eventually discontinued, with its household goods and shelf-stable pantry items rolled into the main Amazon website, where they can be ordered alongside the rest of Amazon’s products.

  13. Amazon Drive

In October 2022, the Amazon Drive app was taken down from the iOS and Android app stores. As of February 1, 2023, Amazon no longer supports uploading files on the Amazon Drive website. You will still be able to view and download your files until December 31, 2023.

  14. Amazon Elements

On 21 January 2015, just six weeks after launch, Amazon discontinued its Prime-only Elements diapers.

  15. Amazon EC2

In July 2021, Amazon Web Services announced it would shut down one of its oldest cloud computing services, EC2-Classic. The company also warned remaining users to migrate off the platform to avoid application downtime.

  16. Kindle Newsstand

As of March 9, 2023, you can no longer subscribe to any publication on Kindle Newsstand. Amazon notified subscribers of the change in a post.

  17. Amazon Local

Amazon Local, a daily-deals site similar to Groupon, was also shut down in 2015, closing shop as the deals model behind both sites rose sharply and then fell.

  18. Amazon Go

Earlier this year, Amazon permanently closed eight of its high-tech Amazon Go convenience stores, including two in Seattle.

  19. Amazon 4-star

In March 2022, Amazon announced it would close all of its 4-star, Books and Pop Up stores, Reuters reported.

  20. Amazon Whole Foods Market

Whole Foods, a wholly owned subsidiary of Amazon, announced in early 2019 that it was shutting down its business in India completely.

  21. Wag

Amazon acquired Quidsi, which expanded into Soap.com, Wag.com, BeautyBar.com, Casa.com, and YoYo.com, in 2010. In 2017, Amazon shut down Quidsi, as it was never profitable.

  22. Amazon Destinations

Amazon also briefly owned a hotel-booking website called Amazon Destinations which was intended to plan quick getaways. It didn’t last long though. Released in April 2015, it was gone by October of the same year.

  23. Amazon Dash Button

Amazon stopped selling the Dash Button, a small Wi-Fi-connected device that instantly ordered pre-selected items on Amazon, in 2019. The buttons were arguably a success, though, since they got customers used to shopping without a screen.

  24. Amazon Tap

The Amazon Tap, a portable version of the company’s Alexa-enabled smart speakers, was the first Echo device to be discontinued without a replacement; Amazon stopped selling it near the end of 2018.

  25. Pop-up stores

Amazon will close all 87 of its pop-up stores and discontinue the program, it told Business Insider in 2019. The stores were a place where customers interested in smart gadgets could see how they worked before purchasing.

  26. Instant Pickup

In 2017, Amazon introduced Instant Pickup, which let customers pick up items within minutes of ordering them. However, the company ended the service in 2018 without specifying a reason.

  27. Amazon Storywriter

In May 2019, Amazon Studios announced it would discontinue Storywriter and Storybuilder on June 30, 2019, with accounts disabled and content that had not been downloaded becoming inaccessible.

The post Killed By Amazon (Part 1) appeared first on Analytics India Magazine.

6 Google Bard enhancements worth exploring

Google Bard opened on a laptop in Dark mode
Image: Andy Wolber/TechRepublic

Bard, an experimental chat system from Google, provides responses to prompts. It conveys content in conventional language, unlike a Google keyword search that delivers a result built around links. Additionally, Bard supports a series of related prompts, much as you might converse with a colleague, which also differentiates it from a standard, single-string keyword search.

Bard initially launched with a waitlist and a limited set of features, but Google has iterated quickly to expand Bard availability and enhance Bard functionality, as detailed below.

Jump to:

  • Bard available in 3 languages, 180 countries and with Workspace accounts
  • Send Bard responses to Gmail or Docs
  • Try Bard for programming or Google Sheets functions
  • Toggle Bard’s dark and light theme
  • Switch to ‘Google it’
  • Compare responses with View other drafts

Bard available in 3 languages, 180 countries and with Workspace accounts

As of May 2023, Bard supports three languages — U.S. English, Japanese and Korean — and may be accessed in more than 180 countries by people who are at least 18 years old. To use Bard, you’ll need to sign in to Bard with a Google account.

If you have a Google Workspace account, you may sign in to Bard only if your Workspace administrator chooses to allow Early Access Apps (Figure A). The terms and privacy agreements for Early Access Apps are different from standard Workspace, so some administrators might opt not to permit Bard usage.

Figure A

Google Workspace administrator settings for Early Access Apps permissions
A Google Workspace administrator may permit access to Bard for people in their organization by enabling both Early Access Apps and Core Data Access Permissions in the Admin Console.

In many cases, an administrator will want to sign in to the Workspace Admin Console, then select Apps | Additional Google Services | Early Access Apps. There, they can adjust the setting to ON and check the box to allow Core Data Access Permissions. Once those two changes have been made, people in the organization may sign in and experiment with Bard with their Google Workspace account.

Send Bard responses to Gmail or Docs

Google makes it straightforward to move a Bard response to a new document or email. Select the Export response button, which displays as an upward-pointing arrow above a line, then choose either Export to Docs or Draft in Gmail (Figure B).

Figure B

Export option highlighted in Google Bard
For further use of Bard response, select the up arrow button, and choose either Export to Docs or Draft in Gmail.

In the first case, the system uses your prompt as the title of a new Google Doc and places the response in the document, whereas in the second the prompt becomes the email subject, with the response in the body of the email. After you select either option, you may choose to open the respective document or email draft directly.

Try Bard for programming or Google Sheets functions

In late April 2023, Google announced that Bard added the ability to generate, explain and help debug code in several programming languages, including C++, Go, Java, JavaScript, Python, C, C#, R, Swift, Kotlin, PHP, Bash, Perl, Ruby, Lua and Rust. Additionally, Bard can help generate HTML, CSS and SQL.

People who use Google Sheets may turn to Bard for help writing functions. For example, you might prompt Bard:

Can you provide me with a Google Sheets formula to identify the day of the week from an entered date?

Bard provides the function you need along with a detailed explanation (Figure C).

Figure C

Google Bard-provided Google Sheets function
Bard can not only assist with programming but also help devise Google Sheets functions.
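For a concrete point of reference, one well-known way to answer that prompt is the Sheets formula `=TEXT(A1, "dddd")`, which formats the date in cell A1 as its full weekday name; Bard's actual suggestion may differ. A quick sketch of the same logic in Python:

```python
from datetime import date

# Sheets' =TEXT(A1, "dddd") maps a date to its full weekday name.
# The equivalent logic in Python:
def day_of_week(d: date) -> str:
    return d.strftime("%A")

print(day_of_week(date(2023, 5, 19)))  # Friday
```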

Toggle Bard’s dark and light theme

People who prefer a dark display may switch to that with a click in the lower-left corner of the screen, as indicated by the oval in the lower left of Figure D.

Figure D

The options to Google it and switch to dark mode highlighted in Google Bard
Select the Google it button to reveal a few related topics. Then, select one to open a new Google search with the chosen terms. Separately, you may toggle between dark and light themes in Bard, as indicated in the lower left.

While this may seem a relatively simple option to provide, keep in mind that Google Docs on the web has yet to offer dark mode support. The fact that Google so rapidly added a dark theme to Bard may indicate how focused the company is on making sure Bard appeals to programmers.

Switch to ‘Google it’

You always have the option to select the Google it button after you receive a Bard response, as shown in Figure D. The Google it option takes your most recent prompt, reformulates it for search, then displays one or more Search related topics options. Choose any of those, and the system opens to a new Google search result page for the selected keywords.

Compare responses with View other drafts

Some responses offer a View other drafts option, selectable in the upper right corner of the response. Typically, the system lets you select from three options, as shown in Figure E: The response initially presented along with two others.

Figure E

The View other drafts option in Google Bard
The View other drafts option now seeks to provide drafts in varied formats, while the regenerate drafts button triggers the system to create a different response to your prompt.

The drafts may vary in format, content and structure. Google has modified the feature to provide greater variety among the drafts. Additionally, a regenerate drafts button displays to the right of drafts, as indicated by the rectangle in Figure E. When you regenerate drafts, Bard creates a new response and a new set of drafts to the prompt.

Thanks to an update in mid-May 2023, some drafts cite sources via links at the bottom of the response. These links make it easier to verify that web pages from which the draft data was created contain content you trust. However, not every draft response includes these source links.

Mention or message me on Mastodon (@awolber) to let me know how you use Bard!


Nearly 60% of workers believe AI could help prevent burnout

Worried person in dark suit sitting at office desk with laptop and notepad being overloaded with work.

Employee burnout has been a hot topic over the last few years. Now, according to a recent survey, 58% of workers believe that artificial intelligence can help alleviate burnout and improve job satisfaction.

A report based on a survey of 6,400 workers from around the world explains that almost a third of employees are feeling the strain and burnout that results from layoffs and hiring freezes. These have come as a result of a grim economic landscape, thanks, in part, to inflation.

As the use of generative AI becomes widespread through tools like OpenAI's ChatGPT, Bing Chat, and Google Bard, interest in automating tasks through technology grows.

The report, created by UiPath, says workers from Gen Z, Millennials, and Gen X together comprise the "automation generation," which is particularly receptive to the potential of automation and improving job performance through the use of AI tools.

Out of the automation generation, Gen Z is by far the most receptive to adopting AI-powered tools in the workplace, with 69% agreeing that these would make their jobs better, while 63% of Millennials and 51% of Gen X feel similarly. In contrast, only 44% of baby boomer respondents have favorable views of AI in the workplace.

With over half of the respondents believing that AI-powered automation can improve job fulfillment, the report also states that 57% view employers that adopt AI-powered tools more favorably than those who do not.

Generative AI tools, like AI chatbots and art generators, can help support employees and modernize operations.

The surveyed workers that support the implementation of AI in the workplace believe these tools would provide more flexibility in their work environment, more time to focus on critical tasks, and more opportunities to learn new skills.

Employees understand the task at hand; in the U.S. alone, 50% of the respondents believe that increased productivity could help them keep their jobs. Because of this, 44% of the workers want to contribute to the creation and adoption of AI tools in the workplace.

Most Americans think AI threatens humanity, according to a poll

Illustration of AI hologram

Fearing the unknown is an integral part of the human experience, especially when the unknown is something as powerful as generative AI. Therefore, as generative AI models grow in popularity, so do the concerns around them.

A new Reuters/Ipsos poll showed that more than two-thirds of Americans have concerns regarding AI risks.

The poll also revealed that fears have grown beyond a simple concern. Of the 4,415 U.S. adults polled, 61% believe that AI is a threat to humanity, nearly triple the number of respondents who didn't foresee it being a threat.

These fears could be partly inspired by sci-fi movies, which have typically depicted AI as a threat — cue The Terminator.

However, many of the fears are also rooted in genuine risks of generative AI models such as ChatGPT.

Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified at a Senate Judiciary Committee hearing this week to address AI risks and call for regulation.

AI pioneers and Turing Award winners Geoffrey Hinton and Yoshua Bengio have both spoken out publicly about the risks of AI and the need for immediate regulation.

In an interview with the Financial Times published on Wednesday, Bengio shared that the AI race has become "unhealthy" and a "danger to political systems, to democracy, to the very nature of truth."

Bengio also encourages people, such as AI experts, regulators, and lawmakers, to take action.

"Right now there is a lot of emotion, a lot of shouting within the wider AI community. But we need more investigations and more thought into how we are going to adapt to what's coming," said Bengio in the interview. "That's the scientific way."

Alibaba to spin off its cloud, AI and business messenger unit

Seven weeks after Alibaba announced its historic restructuring plan to split itself into six independent companies, the juggernaut is gearing up to spin off its intelligence group.

Alibaba went public in New York back in 2014, marking the largest IPO at the time. Not long after Hong Kong relaxed rules around dual-class structures, which allow founders to retain certain control while opening the company to outside investment, in 2019, Alibaba sought a secondary listing in the city. Rising tensions between the U.S. and China also prompted many Chinese companies to retreat from the Nasdaq and NYSE in recent years.

“We are taking concrete steps towards unlocking value from our businesses and are pleased to announce that our board has approved a full spin-off of the Cloud Intelligence Group via a stock dividend distribution to shareholders, with intention for it to become an independent publicly listed company,” Daniel Zhang, chairman and chief executive officer of Alibaba Group, announced in the firm’s earnings report today. Zhang also sits on the cloud arm’s board of directors.

Alibaba aims to complete the spinoff in the next 12 months and plans to include external strategic investors in the group through private financings.

The cloud business generated $2.7 billion in revenue during the first quarter, making up 9% of Alibaba’s total revenues. (My colleague Alex has a financial deep dive into the cloud spinout. Stay tuned for the story.)

Marrying AI and cloud

You might not be familiar with Alibaba’s cloud intelligence group, but think of its main product lines roughly as “AWS+Slack+OpenAI”.

Its cloud business, Alibaba Cloud, dominates China’s market. Globally, Alibaba Cloud was the third largest infrastructure-as-a-service (IaaS) public cloud provider in 2021, according to market research firm Gartner. Adding platform-as-a-service (PaaS) and private cloud to the mix, Alibaba came in fourth in Q4 2021, according to another market insight firm, Synergy Research Group.

Alibaba’s Dingtalk, an enterprise chat app and productivity platform, surpassed 600 million users as of Q3 2022, with 15 million paid daily active users and 23 million enterprise users, the company said previously.

Tongyi Qianwen, Alibaba’s flagship large language model, is currently nowhere near GPT-3’s technological prowess and influence, but it’s one of China’s most promising alternatives to the text-generating AI. It also has the advantage of being applied to an array of Alibaba products. In fact, the integration has started, first with a copilot for Dingtalk.

It makes sense that Alibaba is grouping its cloud business and AI research team under one umbrella, as the two go hand in hand. With each new breakthrough in AI, the amount of computational power needed to train models increases exponentially, and so does the cost.

Interestingly, Alibaba mentioned in its quarterly report that it’s working to make cloud computing “more accessible and affordable.”

“We announced a new instance family that provides the same level of stability and offers up to 40% cost savings. For existing products, we reduced the prices of some of our core utility products, including computing, storage, networking and security products, by up to 50%,” the company said.

The timing seems apt. Earlier this week, Beijing unveiled a draft policy calling on cloud providers to work more closely with AI firms and support them with all the computing resources they need.

“We believe these moves will help our customers increase public cloud adoption in China as well as unlock emerging opportunities to leverage AI technology for enterprises,” Alibaba said.

From Leader to Laggard: How Google Lost Its AI Mojo

Google founder Larry Page envisioned the search company as a great AI platform whose mission would remain unfulfilled until its search engine became AI-complete. However, it seems that the co-founder’s enormous ambition has fizzled out, considering the current state of Google in the AI space. The one-time market leader in AI is now playing second fiddle to OpenAI, looking to replicate the latter’s success by aping its products.

The main focus of Google I/O this year was Bard, a conversational agent with access to the Internet and a direct shot at OpenAI’s ChatGPT. In its mad rush to get Bard to market, the Mountain View giant seems to have ignored its rich history of AI innovation, leaving those earlier efforts behind like so many other products killed by Google.

Code Red gone wrong

In December last year, ChatGPT’s success raised a furore at Google, prompting CEO Sundar Pichai to declare a Code Red for Google Search. According to a source at the company, executives believed that the chatbot could hamper Google’s ads business, which functions mainly through Google Search.

This then prompted a frenzied rush to create a competing chatbot, resulting in the release of Bard in February. In classic Google fashion, the tech giant forgot all about their previous endeavours in AI and chased the shiny new chatbot trend.

In actuality, Google’s AI efforts started even before the founding of OpenAI, built on their treasure trove of user data. Google first began dabbling in machine learning in the early 2010s, with the first big announcement coming in the form of Google Now in 2012.

Google Now used machine learning to recommend news articles based on users’ search history. This evolved into the Google Assistant, which used voice recognition and built on the legacy of Google Now to act as a digital assistant for users. This slowly transitioned into a company-wide strategy.

Sundar Pichai, the CEO of Google, stated in a quarterly financial results call in 2015 that “Machine learning is a core, transformative way by which we’re rethinking how we’re doing everything. We are thoughtfully applying it across all our products, be it search, ads, YouTube, or Play. And we’re in the early days, but you will see us — in a systematic way  –  apply machine learning in all these areas.”

Fast forward to today, and this strategy has come to fruition. Google Search uses the PageRank algorithm for determining top spots, Google Ads use responsive ads for better targeting, and YouTube’s recommendation algorithms set the standard for content recommendations.
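As a refresher on the idea behind that first ranking ingredient, here is a minimal power-iteration sketch of PageRank on a toy three-page web (purely illustrative; Google's production ranking combines many more signals):

```python
def pagerank(links, damping=0.85, iters=50):
    """links[i] lists the pages that page i links to."""
    n = len(links)
    rank = [1.0 / n] * n  # start with uniform rank
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n  # random-jump share
        for i, outs in enumerate(links):
            share = damping * rank[i]
            if outs:
                for j in outs:  # split rank evenly among outlinks
                    new[j] += share / len(outs)
            else:  # dangling page: spread its rank everywhere
                for j in range(n):
                    new[j] += share / n
        rank = new
    return rank

# Page 0 is linked to by both other pages, so it ends up on top.
ranks = pagerank([[1, 2], [0], [0]])
print(max(range(3), key=lambda i: ranks[i]))  # 0
```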

The company also has a rich history of AI research, stemming from its Google Brain division, now integrated into DeepMind. The team behind Google Brain included AI superstars like Andrew Ng, Samy Bengio, Ilya Sutskever, and Geoffrey Hinton. This team was responsible for the research behind Google Translate, GANs, and transformers, the ‘T’ in ChatGPT.

However, it seems that Google forgot about these strategies and advancements with the runaway success of ChatGPT.

Google Brain drain

There is also a long list of people leaving Google behind to start their own AI companies. Dario Amodei, the CEO of Anthropic, worked as a senior research scientist at Google Brain. Cohere was also founded by Aidan Gomez and Nick Frosst, both leading researchers at Google Brain.

Andrew Ng, one of the co-founders of Google Brain, now works closely with OpenAI and is the founder and CEO of deeplearning.ai. More recently, Geoffrey Hinton, the so-called godfather of AI, left Google, citing concerns over AI risk.

This has left Google in a sticky situation, bleeding AI talent in the midst of a recession. It is also unable to capitalise on the AI needs of the day, and its future is looking bleak as well. In a leaked document, a senior Google researcher stated that Google has “no moat” when it comes to AI. This document also conceded defeat to the open source community, while putting OpenAI in the same boat as itself.

However, the difference between Google and OpenAI here is that OpenAI releases its products to market first and then optimises them based on users’ experience. Google, on the other hand, is making a lot of noise about using AI, but its products are being released at a trickle. According to reports, executives mentioned AI 143 times over the course of the two-hour keynote. However, the only product opened to the public after the keynote was Google Bard, which the company reminds its users is ‘experimental’.

Even as Google sets the stage for their ‘bold and responsible AI’ products ‘coming soon’, it seems that they have already fallen behind in the AI race. Ignoring their rich history of bringing AI innovation to the market, Google is running behind the shiny new thing, seeming to miss the forest for the trees.

The post From Leader to Laggard: How Google Lost Its AI Mojo appeared first on Analytics India Magazine.

OpenAI just released an official ChatGPT app for the iPhone

OpenAI's official ChatGPT app for the iPhone

ChatGPT users who'd been looking for an official mobile app can now grab one courtesy of OpenAI, which today launched an iOS app for its popular AI chatbot. Designed for free ChatGPT users as well as paid Plus subscribers, the app provides the same capabilities as the website, letting you enter your requests and prompts and receive AI-generated information in return. But the mobile app also offers a couple of extra benefits.

Using OpenAI's open-source Whisper speech recognition system, the app allows you to speak your queries rather than enter them by tapping away at your keyboard. The app will also sync your chat history across your computer and mobile device so you can pick up from a previous conversation. Those of you with a paid ChatGPT Plus subscription can also use the app to gain early access to new features and tap into GPT-4 with its enhanced capabilities.

OpenAI touted the app as offering a variety of features, including instant answers that try to provide precise info without saddling you with ads, tailored advice on specific interests and hobbies, creative inspiration to generate content, professional input to assist you with complex ideas and technical subjects, and learning opportunities to help you explore new languages and other areas.

The launch of the new app has kicked off in the U.S. and will expand to other countries in the coming weeks, according to OpenAI. And what of Android users? The company teased that an Android version of the mobile app is coming soon.

To download the app on your iPhone, grab it from Apple's App Store. After opening the app, log in with your ChatGPT account or tap the Sign up button if you don't yet have an account. At the main screen, you can then type or speak your prompt at the Message field and wait for the response. An ellipsis icon at the top displays a menu with options to rename or delete a chat, access your chat history, start a new chat, and tweak the app's settings.

Microsoft’s Struggle is NVIDIA’s Strength 

NVIDIA Trying to Keep AI Chatbots’ Hallucinations ‘On Track’

At Knowledge 2023, the California-based software company ServiceNow and NVIDIA announced a partnership to develop custom AI models for various enterprise functions, starting with IT workflows and business automation.

“As the adoption of generative AI continues to accelerate, organisations are turning to trusted vendors with battle-tested, secure AI capabilities to boost productivity, gain a competitive edge, and keep data and IP secure,” said CJ Desai, president and COO of ServiceNow.

Jensen Huang, founder and CEO of NVIDIA, said that IT is the nervous system of every modern enterprise in every industry. He believes that this collaboration to build super-specialised generative AI for enterprises will boost the capability and productivity of IT professionals worldwide who use the ServiceNow platform.

This new development comes against the backdrop of scepticism in the enterprise and IT landscape, particularly around the use of foundation models developed by OpenAI and Microsoft – the likes of GPT-4 and Codex – which have been trained on public-domain data to deliver the desired outcomes.

To make matters worse, a class action lawsuit was filed against Microsoft, OpenAI, and GitHub in November last year for scraping licensed code to build the AI-powered Copilot. The suit has been one of the biggest roadblocks for the companies, which are now looking to escape it by asking the court to dismiss the proposed class complaint.

In a previous interview with AIM, Gary Bhattacharjee, VP of data strategy and AI at Infosys, said that code IP is a challenge that needs to be addressed. He noted that GPT is trained on everything its creators could find on the internet, including open-source code.

The scepticism is real. It is mostly around the misuse of internal data and leakage of sensitive information for enterprises. The trust issue for using OpenAI or Microsoft platforms is growing day by day.

For example, Samsung has been pretty vocal about banning ChatGPT after the blunder created by its employees, where they used it to troubleshoot proprietary code and summarise internal meeting notes. Now, the company is looking to ditch ChatGPT forever and make its own version of an LLM-powered chatbot to prevent further mishaps from occurring.

Besides Samsung, several companies, including Amazon, Goldman Sachs, Bank of America, Wells Fargo, and others have also restricted employees from using the chatbot over the fear of sharing confidential information.

While safety concerns remain a top priority for companies, accuracy is also one of the biggest concerns for enterprises, as most publicly trained foundational models may not give out accurate output that is specific to the company’s needs and requirements.

With this partnership, the duo – NVIDIA and ServiceNow – is looking to address these challenges by building custom generative AI models for enterprises, fine-tuned to enterprise needs and requirements, focusing on domain-specific use cases.

Testing the Waters

To enable this, ServiceNow will be using NVIDIA’s software, services and accelerated infrastructure, to develop custom large language models trained on data specifically for its ServiceNow Platform.

The company said this will expand ServiceNow's already extensive AI functionality with new uses for generative AI across the enterprise, from IT departments and customer service teams to employees and developers, to strengthen workflow automation and rapidly increase productivity.

At the same time, ServiceNow will also be helping NVIDIA streamline its IT operations with these generative AI tools, using NVIDIA data to customise NVIDIA NeMo foundation models running on hybrid-cloud infrastructure, consisting of NVIDIA DGX SuperPOD AI supercomputers.

Read: NVIDIA Open-Sources Guardrails for Hallucinating AI Chatbots

Jonathan Cohen, VP of applied research at NVIDIA, explained how the guardrails could be implemented. He said that while the company had been working on the Guardrails system for years, it discovered about a year ago that the system also works well with OpenAI's GPT models.

Enterprise-Specific Use Cases

The duo is exploring a number of generative AI use cases to enhance productivity in IT. This includes developing virtual assistants and agents to help quickly resolve a wide range of user questions and support requests with purpose-built AI chatbots that leverage large language models and focus on defined IT tasks.

In addition, they are looking to simplify the user experience so that enterprises can customise chatbots with proprietary data, creating a central generative AI resource that stays on topic while resolving multiple requests, and to improve the employee experience by helping identify growth opportunities and more.

Competition Galore

IT has become the sweet spot for generative AI companies. NVIDIA now finds itself competing with the likes of Microsoft, OpenAI, IBM, and Google, while also challenging enterprise automation companies such as SAP and Zoho, all of which are looking to win over enterprises with generative AI services and offerings.

Recently, Cognizant announced the launch of Cognizant Neuro AI, a new platform that gives enterprises a comprehensive approach to adopting generative AI in a flexible, secure, scalable, and responsible way. Prasad Sankaran, the company's VP of software and engineering, said the platform goes beyond proofs of concept, aiming to accelerate the adoption of enterprise-scale AI applications while improving RoI and minimising risk.

TCS is working on its own alternative to GitHub Copilot to revamp enterprise code generation, while Capgemini, another IT player, is also bringing generative AI-based solutions to its clients.

Last week, IBM announced the launch of WatsonX, a platform that enables enterprises to design and customise LLMs to fit their operational and business needs. Last month, OpenAI said it would launch ChatGPT Business in the coming months, promising enterprises more control over their data and over how their teams use the chatbot.

Meanwhile, Microsoft has stepped up its chip game to take on NVIDIA. Reports suggest the company has been secretly working with AMD since 2019 on its own AI processors, known as Athena, and speculation holds that Microsoft may be financing AMD's AI chip development, a move against its current partner NVIDIA that caused NVIDIA's shares to decline. Microsoft has previously used AMD's technology in various products, including the AI infrastructure of its Azure cloud services and the Xbox Series X and Series S consoles.

Now let’s see if NVIDIA continues to lead the game or gives in to Microsoft’s threat.

The post Microsoft’s Struggle is NVIDIA’s Strength appeared first on Analytics India Magazine.

Get an AI content generator that goes far beyond creating copy for just $50

Image: StackCommerce

Generating effective content is far too time-consuming when trying to run a business, so it’s a relief that technology has advanced to the point where we can pass the chore to artificial intelligence. However, AI content generators aren’t all created equal. You want an innovative AI content generator that can transform your content production.

You want Scribbyo AI, and you should get it while a lifetime subscription to the Bronze Plan is just $49.99.

Obviously, you will get content that is highly engaging, but Scribbyo will give that to you in 37 languages, opening your content to a wider audience and spreading it worldwide. And the program goes far beyond thought-provoking copy.

Nothing drives engagement like visuals, and Scribbyo also provides stunning AI image generation to perfectly complement your content. Best of all, Scribbyo is super easy to use, with over 50 prompt templates ready-made for a variety of use cases. Or unleash your imagination and use the AI Chatbot to directly create prompts to generate content without using any templates.

Scribbyo also has an AI voice over generator, so you can have realistic human voices reading any text in 540 male and female voices with 140 languages and accents. Or you can let the AI Voice Transcription feature transcribe text from any audio or video with full punctuation and up to 99% accuracy.

Scribbyo even includes an AI code generator that lets you create premium code for your app or website. It is custom code that is optimized for security, as well as performance, in any programming language you like.

Marketing professionals, bloggers and more are well-satisfied with Scribbyo, with a customer rating of 4.8 out of 5 stars. John K, an e-commerce business owner, says:

“Scribbyo has been an amazing tool for my e-commerce business. The AI content generator has helped me create high-quality and unique product descriptions for my website, while the AI image creation feature has allowed me to generate professional-looking product images in just a few clicks. The ready-made prompt templates have also been a huge help in creating marketing campaigns. If you’re an eCommerce business owner, I highly recommend giving Scribbyo a try.”

Get a Scribbyo AI: Lifetime Subscription today while it’s available for the best-on-web price of just $49.99.

Prices and availability are subject to change.

Meta unveils its first custom AI chip

Meta on Thursday unveiled its first chip, the MTIA, which it said was optimized to run recommendation engines, and benefits from close participation with the company's PyTorch developers.

Meta Properties, owner of Facebook, WhatsApp and Instagram, on Thursday unveiled its first custom-designed computer chip tailored especially for processing artificial intelligence programs, called the Meta Training and Inference Accelerator, or "MTIA."

The chip, consisting of a mesh of blocks of circuits that operate in parallel, runs software that optimizes programs using Meta's PyTorch open-source developer framework.

Also: What is deep learning? Everything you need to know

Meta describes the chip as being tuned for one particular type of AI program: deep learning recommendation models. These are programs that can look at a pattern of activity, such as clicking on posts on a social network, and predict related, possibly relevant material to recommend to the user.
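The scoring step at the heart of such a recommendation model can be sketched in a few lines. This is a toy illustration of the general idea, in which learned user and item embedding vectors are compared by dot product, not Meta's actual model; every name and dimension below is invented for the example.

```python
import numpy as np

# Toy sketch: users and items are represented as learned embedding vectors.
# Each candidate item is scored by its dot product with the user's vector,
# and the highest-scoring items are recommended.
rng = np.random.default_rng(0)
user_embedding = rng.normal(size=16)            # one user's learned vector
item_embeddings = rng.normal(size=(100, 16))    # 100 candidate posts
scores = item_embeddings @ user_embedding       # one score per candidate
top5 = np.argsort(scores)[::-1][:5]             # indices of the 5 best posts
```

In a real system the embeddings come from training on interaction data (clicks, likes) rather than random initialisation, and the candidate pool is vastly larger, which is what makes dedicated hardware attractive.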

The chip is version one of what Meta refers to as a family of chips; the company said work on it began in 2020. No detail was offered as to when future versions will arrive.

Meta follows other giant tech companies that have developed their own chips for AI in addition to using the standard GPU chips from Nvidia that have come to dominate the field. Microsoft, Google and Amazon have all unveiled multiple custom chips over the past several years to handle different aspects of AI programs.

Also: Nvidia, Dell, and Qualcomm speed up AI results in latest benchmark tests

The Meta announcement was part of a broad presentation Thursday in which several Meta executives discussed how they are beefing up Meta's computing capabilities for artificial intelligence.

In addition to the MTIA chip, the company discussed a "next-gen data center" it is building that "will be an AI-optimized design, supporting liquid-cooled AI hardware and a high-performance AI network connecting thousands of AI chips for data center-scale AI training clusters."

Also: ChatGPT and the new AI are wreaking havoc on cybersecurity

Meta also disclosed a custom chip for encoding video, called the Meta Scalable Video Processor. The chip is designed to more efficiently compress and decompress video and encode it into multiple different formats for uploading and viewing by Facebook users. Meta said the MSVP chip "can offer a peak transcoding performance of 4K at 15fps at the highest quality configuration with 1-in, 5-out streams and can scale up to 4K at 60fps at the standard quality configuration."

Rather than rely on Nvidia GPUs, or CPUs from Intel, Meta said, "with an eye on future AI-related use cases, we believe that dedicated hardware is the best solution in terms of compute power and efficiency" for video. The company noted that people spend half their time on Facebook watching video, with over four billion video views per day.

Also: Meet the post-AI developer: More creative, more business-focused

Meta has for years hinted at its development of a chip, as when its chief AI scientist, Yann LeCun, was interviewed by ZDNET in 2019 on the matter. The company kept silent about the details of those efforts even as its peers rolled out chip after chip, and as startups such as Cerebras Systems, Graphcore and SambaNova Systems arose to challenge Nvidia with exotic chips focused on AI.

The MTIA has aspects similar to chips from the startups. At the heart of the chip, a mesh of sixty-four so-called processor elements, arranged in a grid of eight by eight, echoes many designs for AI chips that adopt what is called a "systolic array," where data can move through the elements at peak speed.
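The systolic-array idea can be illustrated with a small simulation: operands stream through a grid of processing elements, each of which multiplies and accumulates as data passes by, so a matrix product emerges from purely local, neighbour-to-neighbour movement. This is a generic sketch of the technique, not Meta's design; the function name and grid setup are invented for the example.

```python
import numpy as np

def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing C = A @ B.

    Each processing element (PE) at grid position (i, j) keeps a running
    accumulator. A-values stream in from the left, B-values from the top,
    skewed by one cycle per row/column so that matching operand pairs
    arrive at each PE on the same cycle.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    acc = np.zeros((M, N))    # one accumulator per PE (the output stays put)
    a_reg = np.zeros((M, N))  # value each PE forwards to its right neighbour
    b_reg = np.zeros((M, N))  # value each PE forwards to its lower neighbour
    for t in range(M + N + K - 2):  # enough cycles to drain the pipeline
        new_a = np.zeros((M, N))
        new_b = np.zeros((M, N))
        for i in range(M):
            for j in range(N):
                # Edge PEs read from the skewed input feeds; inner PEs read
                # what their neighbours forwarded on the previous cycle.
                a_in = a_reg[i, j - 1] if j > 0 else (A[i, t - i] if 0 <= t - i < K else 0.0)
                b_in = b_reg[i - 1, j] if i > 0 else (B[t - j, j] if 0 <= t - j < K else 0.0)
                acc[i, j] += a_in * b_in  # multiply-accumulate in place
                new_a[i, j] = a_in
                new_b[i, j] = b_in
        a_reg, b_reg = new_a, new_b
    return acc
```

The skewed feed guarantees that A[i, k] and B[k, j] meet at PE (i, j) on cycle i + k + j, which is why no operand ever needs to be fetched twice; a hardware array does all PE updates in a single clock tick rather than the nested Python loops used here.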

The MTIA chip is somewhat unusual in being constructed to handle both of the two main phases of artificial intelligence programs, training and inference. Training is the stage when the neural network of an AI program is first refined until it performs as expected. Inference is the actual use of the neural network to make predictions in response to user requests. Usually, the two stages have very different requirements in terms of computer processing and are handled by distinct chip designs.

Also: This new technology could blow away GPT-4 and everything like it

The MTIA chip, said Meta, can be up to three times more efficient than GPUs in terms of the number of floating-point operations per second for every watt of energy expended. However, when the chip is tasked with more complex neural networks, it lags GPUs, Meta said, indicating more work is needed on future versions of the chip to handle complex tasks.

Meta's presentation by its engineers Thursday emphasized how MTIA benefits from hardware-software "co-design," where the hardware engineers exchange ideas in a constant dialogue with the company's PyTorch developers.

In addition to writing code to run on the chip in PyTorch or C++, developers can write in a dedicated language developed for the chip called KNYFE. The KNYFE language "takes a short, high-level description of an ML operator as input and generates optimized, low-level C++ kernel code that is the implementation of this operator for MTIA," Meta said.

Also: Nvidia says it can prevent chatbots from hallucinating

Meta discussed how it integrated multiple MTIA chips into server computers based on the Open Compute Project that Meta helped pioneer.

More details on the MTIA are provided in a blog post by Meta.

Meta's engineers will present a paper on the chip at the International Symposium on Computer Architecture conference in Orlando, Florida, in June, titled, "MTIA: First Generation Silicon Targeting Meta's Recommendation System."
