IBM intros a slew of new AI services, including generative models

Kyle Wiggers

IBM, like pretty much every tech giant these days, is betting big on AI.

At its annual Think conference, the company announced IBM Watsonx, a new platform that delivers tools to build AI models and provides access to pretrained models for generating computer code, text and more.

It’s a bit of a slap in the face to IBM’s back-office managers, who just recently were told that the company will pause hiring for roles it thinks could be replaced by AI in the coming years.

But IBM says the launch was motivated by the challenges many businesses still experience in deploying AI within the workplace. Thirty percent of business leaders responding to an IBM survey cite trust and transparency issues as barriers holding them back from adopting AI, while 42% cite privacy concerns — specifically around generative AI.

“AI may not replace managers, but the managers that use AI will replace the managers that do not,” Rob Thomas, chief commercial officer at IBM, said in a roundtable with reporters. “It really does change how people work.”

Watsonx solves this, IBM asserts, by giving customers access to the toolset, infrastructure and consulting resources they need to create their own AI models or fine-tune and adapt available AI models on their own data. Using Watsonx.ai, which IBM describes in fluffy marketing language as an “enterprise studio for AI builders,” users can also validate and deploy models as well as monitor models post-deployment, ostensibly consolidating their various workflows.

But wait, you might say, don’t rivals like Google, Amazon and Microsoft already provide this or something fairly close to it? The short answer is yes. Amazon’s comparable product is SageMaker Studio, while Google’s is Vertex AI. On the Azure side, there’s Azure AI Platform.

IBM makes the case, however, that Watsonx is the only AI tooling platform in the market that provides a range of pretrained, developed-for-the-enterprise models and “cost-effective infrastructure.”

“You still need a very large organization and team to be able to bring [AI] innovation in a way that enterprises can consume,” Dario Gil, SVP at IBM, told reporters during the roundtable. “And that is a key element of the horizontal capability that IBM is bringing to the table.”

That remains to be seen. In any case, IBM is offering seven pretrained models to businesses using Watsonx.ai, a few of which are open source. It’s also partnering with Hugging Face, the AI startup, to include thousands of Hugging Face–developed models, datasets and libraries. (For its part, IBM is pledging to contribute open source AI dev software to Hugging Face and make several of its in-house models accessible from Hugging Face’s AI development platform.)

The three that the company is highlighting at Think are fm.model.code, which generates code; fm.model.NLP, a collection of large language models; and fm.model.geospatial, a model built on climate and remote sensing data from NASA. (Awkward naming scheme? You betcha.)

Similar to code-generating models like GitHub’s Copilot, fm.model.code lets a user give a command in natural language and then builds the corresponding coding workflow. Fm.model.NLP comprises text-generating models for specific and industry-relevant domains, like organic chemistry. And fm.model.geospatial makes predictions to help plan for changes in natural disaster patterns, biodiversity and land use, in addition to other geophysical processes.

These might not sound novel on their face. But IBM claims that the models are differentiated by a training dataset containing “multiple types of business data, including code, time-series data, tabular data and geospatial data and IT events data.” We’ll have to take its word for it.

“We allow an enterprise to use their own code to adapt [these] models to how they want to run their playbooks and their code,” Arvind Krishna, the CEO of IBM, said in the roundtable. “It’s for use cases where people want to have their own private instance, whether on a public cloud or on their own premises.”

IBM is using the models itself, it says, across its suite of software products and services. For example, fm.model.code powers Watson Code Assistant, IBM’s answer to Copilot, which allows developers to generate code using plain English prompts across programs including Red Hat’s Ansible. As for fm.model.NLP, those models have been integrated with AIOps Insights, Watson Assistant and Watson Orchestrate — IBM’s AIOps toolkit, smart assistant and workflow automation tech, respectively — to provide greater visibility into performance across IT environments, resolve IT incidents in a more expedient way and improve customer service experiences — or so IBM promises.

FM.model.geospatial, meanwhile, underpins IBM’s EIS Builder Edition, a product that lets organizations create solutions addressing environmental risks.

Alongside Watsonx.ai, under the same Watsonx brand umbrella, IBM unveiled Watsonx.data, a “fit-for-purpose” data store designed for both governed data and AI workloads. Watsonx.data allows users to access data through a single point of entry while applying query engines, IBM says, plus governance, automation and integrations with an organization’s existing databases and tools.

Complementing Watsonx.ai and Watsonx.data is Watsonx.governance, a toolkit that — in IBM’s rather vague words — provides mechanisms to protect customer privacy, detect model bias and drift, and help organizations meet ethics standards.

New tools and infrastructure

In an announcement related to Watsonx, IBM showcased a new GPU offering in IBM Cloud optimized for compute-intensive workloads — specifically training and serving AI models.

The company also showed off the IBM Cloud Carbon Calculator, an “AI-informed” dashboard that enables customers to measure, track, manage and help report carbon emissions generated through their cloud usage. IBM says it was developed in collaboration with Intel, based on tech from IBM’s research division, and can help visualize greenhouse gas emissions across workloads down to the cloud service level.

It could be said that both products, in addition to the new Watsonx suite, represent something of a doubling down on AI for IBM. The company recently built an AI-optimized supercomputer, known as Vela, in the cloud. And it has announced collaborations with companies such as Moderna and SAP Hana to investigate ways to apply generative AI at scale.

The company expects AI could add $16 trillion to the global economy by 2030 and that 30% of back-office tasks will be automated within the next five years.

“When I think of classic back-office processes, not just customer care — whether it’s doing procurement, whether it’s elements of supply chain [management], whether it’s elements of IT operations, or elements of cybersecurity … we see AI easily taking anywhere from 30% to 50% of that volume of tasks, and being able to do them with much better proficiency than even people can do them,” Gil said.

Those might be optimistic (or pessimistic, if you’re humanist-leaning) predictions, but Wall Street has historically rewarded the outlook. IBM’s automation solutions — part of the company’s software segment — grew revenue by 9% year over year in Q4 2022. Meanwhile, revenue from data and AI solutions, which focuses more on analytics, customer care and supply chain management, grew sales by 8%.

But as a piece in Seeking Alpha notes, there’s reason to lower expectations. IBM has a difficult history with AI, having been forced to sell its Watson Health division at a substantial loss after technical problems led high-profile customer partnerships to deteriorate. And rivalry in the AI space is intensifying; IBM faces competition not only from tech giants like Microsoft and Google but also from startups like Cohere and Anthropic that have massive capital backing.

Will IBM’s new apps, tools and services make a dent? IBM’s hoping so. But we’ll have to wait and see.

This Mysterious Man Threatens OpenAI’s Sam Altman with a Lawsuit

The Curious Case of Sam Altman vs Anthony Trupia

Sam Altman and OpenAI are in trouble. A lawsuit has been filed by Anthony Trupia against Altman and several stakeholders of OpenAI, including Reid Hoffman, Greg Brockman, Ilya Sutskever, Andrej Karpathy, and investors like Microsoft Corporation, Sequoia Capital, and Bedrock Capital Partners, among others.

Plaintiff Trupia alleges that OpenAI, while running a “Non-Profit” entity — supposedly for ‘the benefit of all humanity’ — has perpetrated a massive fraud on donors, beneficiaries, and the public at large, and has exposed ‘all of humanity’ to massive, unprecedented risks for personal gain. The complaint further states that OpenAI has used this technology for the benefit of its beneficiaries, in violation of federal law.

Anthony Trupia, who goes by the username The Short Straw on Twitter, is the person who filed the lawsuit. In a reply to his tweet, one Twitter user suggested he filed it because he did not get into YC. We reached out to Trupia for comment, but he did not respond.

In a Twitter thread, Trupia explains in detail the basis and importance of his lawsuit. He describes how the corporation’s structures are haphazard, with assets constantly passed back and forth and used freely, citing the renaming of YC Research to OpenResearch as an example. “This is called commingling,” he adds, referring to the mixing of personal and business finances.

When a corporation is formed, it has a separate legal identity from its owners. But according to Trupia, this is not the case with OpenAI.

“Commingling is generally illegal whether you are for-profit or non-profit. All the resources of the non-profit arm of OpenAI are commingled with resources of the for-profit arm. Same board members, same technology assets, and again, even the same name,” explained Trupia. This is also confirmed in a blog post by OpenAI.

Furthermore, Trupia claims that the YC network came together to pool its money under the banner of the non-profit OpenAI in order to avoid copyright claims, avoid paying taxes, and create “the most powerful supercomputer.”

Trupia’s concern is that the company is profiting the YC partners rather than “all of humanity”, which is touted as the purpose behind OpenAI’s founding.

Does the lawsuit have any basis?

This lawsuit against OpenAI rests on the claim that the company’s non-profit arm, which names “all of humanity” as its beneficiary, has mostly been benefiting its “incredibly narrow” board and the network of Y Combinator, the tech accelerator. The key detail here is that Altman was president of Y Combinator before becoming the CEO of OpenAI.

This means that even though Altman takes no profits from OpenAI, he still makes money from Y Combinator, along with all the other beneficiaries. In a recent tweet, Altman himself noted how being an investor is easy, high-status, and lucrative.

being a VC is so easy, so high-status, the money is so great, and the lifestyle is so fun.
very dangerous trap that many super talented builders never escape from, until they eventually look back on it all and say “damn i’m so unfulfilled”.

— Sam Altman (@sama) May 7, 2023

Altman is still the chairman of YC’s accelerator program even after stepping down as president of the investment fund. “The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission, which allows us to raise investment capital and attract employees with startup-like equity,” reads the OpenAI blog. This means that Altman still stands to profit from YC.

A lot of Twitter users agree with the lawsuit. Many others, however, are sceptical about whether it holds any ground or makes any sense at all. Lewis from SpellCraftAI asks in the thread, “At exactly what point in time did OpenAI executives assume any fiduciary duty to you at all?”

Another user points out that this is how all non-profit entities normally work. The fact that OpenAI is not benefiting “all of humanity” as claimed in its mission might merely be a misalignment, not actual fraud against the world.

Possibly, the proposed six-month pause on giant AI experiments, which was aimed mostly at OpenAI, was actually a good idea. Regardless of the merit of the lawsuit, the conversation around ethical business practices for these ethically questionable AI products is important, and the lawsuit definitely brings it to the forefront.

After Midjourney and Stability AI ran into legal trouble over copyright infringement, it is now OpenAI that is landing in legal trouble.

Sam’s Altruism isn’t That True

Altman has voiced concern about the ethical implications of OpenAI’s models like ChatGPT, just as much of the public has. But he also insists that as long as the technology is under OpenAI’s control, it will remain safe. Now his altruistic approach to AI seems a bit fishy.

Most importantly, this lawsuit raises the question of ethical practices at OpenAI and whether the company is actually building AI for the benefit of all humanity, or just for its beneficiaries. It might not be illegal, but is it ethical?

Moreover, people researching AI safety on behalf of billionaires cannot be completely trusted on their intentions and ethics. The possibility that the AI doomers are right about the dangers within these models may well be linked to the massive AI race among big tech.

OpenAI’s intentions with AI have never been quite clear anyway. Earlier, the company decided to trademark ‘GPT’, restricting anyone else from using the term in their product names. Maybe this time, it is actually the beginning of the end of OpenAI. Or it might just be a lawsuit that does not affect the company at all.

Who knew that the technology the Sam Altman-led OpenAI has been developing all this while might bring it legal trouble? Though there has been talk about its use of copyrighted material for training its models, that never actually landed the company in trouble, apart from bans in several countries. But now the case is different, and potentially threatening for the company.

The shift of OpenAI from being a non-profit organisation to a for-profit company in 2019 has been criticised by Elon Musk for a long time. Moreover, he had also mentioned in a tweet that “it’s lawsuit time” against OpenAI for using Twitter’s data for training ChatGPT.

Earlier, OpenAI received a defamation lawsuit from Brian Hood, an Australian regional mayor, because ChatGPT was spreading false information that he had gone to prison for bribery.

The post This Mysterious Man Threatens OpenAI’s Sam Altman with a Lawsuit appeared first on Analytics India Magazine.

Build a ChatGPT-like Chatbot with These Courses

Image by firmufilms on Freepik

New technological advancement is always attention-grabbing. Data science and its applications have been at the forefront of attention for years. 2023 started strong with OpenAI releasing an AI-based chatbot, ChatGPT. The release of ChatGPT has been thunderous, with everyone using it in different ways, challenging the limits of AI and the chatbot itself. It even pushed other companies, like Notion, to build better chatbots to challenge and overcome ChatGPT.

Not just companies but also many data scientists, both novices and experts, played with the idea of building their own version of ChatGPT to gain more knowledge and experience in building AI-based chatbots or to challenge and grow their skills. You reading this article means you also thought about making a ChatGPT-like chatbot or are just curious about what it takes to build such a tool.

This article will review the knowledge you need to know to build your version of ChatGPT. But before we jump into the technical knowledge we need to develop a chatbot, let's talk briefly about what it takes to build a chatbot.

Since we are considering building a ChatGPT-like chatbot, which is a web-based chatbot, we will need to consider two parts when designing and building it: the front end (how the chatbot looks), which the user interacts with, and the backend, the core of the chatbot, or what we will call its brains.

Let's dive into some courses that will give you the knowledge you need to build an AI-based chatbot that looks good and functions well. We suggest a course for each element, so you get all the tools you need to build your own ChatGPT.

The Looks

First, we will start with what it takes to build the looks of the chatbot; the more intuitive your chatbot looks, the better the user experience will be. So, what do you need to know to make a good interface for your chatbot?

UI Design

There are two aspects to how a webpage looks: its general aesthetics and how intuitive its design is. The look and feel of the webpage (the chatbot, in our case) is the domain of UI (User Interface) design.

When you build a chatbot, knowing the fundamental principles of user interface design is essential. This course offered by CalArts will give you an understanding of the fundamentals of UI design.

UX Design

A beautiful design with color is good, but if it's challenging to navigate, then how it looks will matter little. Here is where knowing the basics of UX (User experience) comes in handy. UX is the art of designing applications that are easy to navigate and use, hence providing a better experience for anyone using that app. For example, if we want to build a good chatbot, it must look good and be easy and intuitive. CalArts also offers a course to help you gain the knowledge you need to make a chatbot with good UX.

HTML & CSS

Since we are trying to build a web-based chatbot, we need to know how to build web applications. That means we need to know some HTML and CSS. Of course, today, we can use many services to help develop a webpage without writing HTML or CSS.

But knowing them will give you more control over what you're building and its details. This course from Codecademy will help you learn the basics of HTML and CSS. Or you can check out this guided project from Coursera that you can get done in under 2 hours.

Image by Author
The Brains

Now that we've designed how the chatbot looks, let's get into building its brains. We want to build an AI-based chatbot, so we must master the basics of data science, programming, and AI. We can divide the brains of ChatGPT into two sections: the fundamentals of data science and the core of chatbots. Now let's look at each of those in a bit of detail.

The Fundamentals of Data Science

Programming and Math

Data science and all its applications are based on some math knowledge (probability theory and linear algebra) and programming. However, if you already know the basics of data science, you can skip this step and move to the core of building the heart of the chatbots section.

If ChatGPT got you curious about starting your journey in data science applications, this course by Harvard University would help you get your foot in by providing the math and programming knowledge you need to begin building chatbots!

Machine Learning

Once you're comfortable writing code and know some math, we can now move on to one of the fundamental building blocks of any data science application, machine learning. Machine learning is a collection of algorithms and techniques used to make computers smarter. You can learn the basics of machine learning using this course from Stanford University.

The Core of Chatbots

Chatbots are an application of data science, specifically natural language processing (NLP), that aims to create a system the user can converse with. We can categorize chatbots by their main functionality into three types:

  1. Simple NLP chatbots
  2. Implications-based chatbots
  3. Intelligence-based chatbots

The first type is a basic chatbot that holds a simple conversation with the user; the second is often used to deal with users' problems and powers the support bots on most websites. Finally, the third type simulates and predicts how the user may interact with the UI. Looking closely at ChatGPT, we will notice it is a mix of all three. To build an AI-based chatbot, we need to know the basics of natural language processing (NLP), AI, and the fundamentals of building a chatbot.
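To make the first category concrete, here is a minimal sketch of a simple rule-based NLP chatbot in Python; the patterns and canned replies are invented for illustration and are not taken from any of the courses mentioned:

```python
import re

# Minimal rule-based chatbot: each entry pairs a regex with a canned reply.
# The patterns and replies are illustrative placeholders.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\bname\b", re.I), "I'm a demo bot built from simple rules."),
    (re.compile(r"\b(bye|goodbye)\b", re.I), "Goodbye!"),
]

def reply(message: str) -> str:
    """Return the reply for the first matching rule, or a fallback."""
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    return "Sorry, I didn't understand that."

if __name__ == "__main__":
    print(reply("Hey there"))        # matches the greeting rule
    print(reply("What is quantum"))  # falls through to the fallback
```

The more advanced categories swap the regex table for a trained language model, but the loop — interpret the message, pick a response — stays the same.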

Image by macrovector on Freepik

Natural Language Processing

This Udemy course will get you comfortable with NLP, what it means, its fundamentals, and its various applications, including chatbots.

Chatbot Basics

Covering the basics of NLP is the first step to building a chatbot. Once you know the basics, we can get into more details on how to design and build chatbots, mainly using this course.

Artificial Intelligence

For your chatbot to feel realistic and have engaging conversations with the user, the chatbot needs to be intelligent or resemble human intelligence. To do that, we will use AI. Hence we need to learn how to apply AI techniques to our chatbot. This course from DeepLearning.AI covers the basics of AI and how to use it to build chatbots.

Conclusion

ChatGPT has been the media's focus recently for an excellent reason. It is clear evidence of how powerful technology can be. It proves we can design great tools that make our lives easier and challenge us to be and do better simultaneously.

ChatGPT triggered the curiosity of so many people, both those in tech and outside, to know how such a tool can be built. Though some may feel like developing such a tool must be complex, the core of building a chatbot is more straightforward than it seems.

This article discussed what you need to know to build a ChatGPT-like chatbot. So, next time you have a free weekend, you can try building a chatbot; maybe the result will be a chatbot that competes with ChatGPT!

Sara Metwalli is a Ph.D. candidate at Keio University researching ways to test and debug quantum circuits. She is an IBM research intern and Qiskit advocate helping build a more quantum future, and a writer on Medium, Built-in, She Can Code, and KDN covering programming, data science, and tech topics. She is also a lead in the Women Who Code Python international chapter, a train enthusiast, a traveler, and a photography lover.


7 Futuristic Use-Cases for AI Automation in Data Engineering

AI entering the field of data science is nothing new. An increasing number of tools that employ AI are being deployed in analytics and data engineering, and data scientist roles are shifting as a result. With AI automation coming into the picture, it will be interesting to see how data engineering pans out.

From simplifying data to improving quality, here are some future use cases where AI automation can play a significant role in data engineering.

Big Query Management

By 2025, the data generated each day is expected to reach 463 exabytes globally. With this level of vast data comes the challenge of effective data management. The first step of sorting and querying is where you face the bottleneck. Integrating AI into databases can help improve efficiency. Automated query management, prioritising queries, and minimising manual database monitoring are some of the improvements that can boost efficiency.

Managing Data Quality

According to a research report by Gartner, poor data quality costs organizations an estimated $12.9 million annually. From data integration issues to data duplication, multiple factors contribute to poor-quality data. Not only does this have financial consequences, but it also adds complexity to data ecosystems and can even lead to faulty decision-making. Mitch N., Founder and Managing Partner of bringga, believes that an automated, AI-enabled data evaluation model can help perform root-cause analysis and identify data quality issues.
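As a rough sketch of what an automated data-quality check might look like (the records, fields, and rules below are invented for illustration), a profiler can count missing values and duplicate keys before rows enter the warehouse:

```python
from collections import Counter

def profile(rows, key):
    """Report missing values per field and duplicate counts on a key field.

    `rows` is a list of dicts; a value of None or "" counts as missing.
    """
    missing = Counter()
    for row in rows:
        for field, value in row.items():
            if value in (None, ""):
                missing[field] += 1
    keys = Counter(row[key] for row in rows)
    duplicates = {k: n for k, n in keys.items() if n > 1}
    return {"missing": dict(missing), "duplicates": duplicates}

# Hypothetical customer records with one gap and one duplicated id.
records = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},
    {"id": 1, "email": "a@example.com"},
]
report = profile(records, key="id")
```

An AI-enabled evaluation model would go further — learning which anomalies matter and tracing them to a root cause — but simple profiling like this is the raw signal it would consume.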

Master Data Management

In continuation of the above point, master data can be better managed through intelligent, AI-powered match-merge algorithms. These can reduce uncertainty by accurately matching and merging data, improving its overall quality, while avoiding the error-prone manual matching and merging of datasets. With intelligent deduplication, the data is refined further.
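A minimal illustration of match-merge deduplication, using Python's standard difflib for fuzzy string matching. The records, similarity threshold, and greedy merge strategy here are illustrative assumptions, not any vendor's algorithm:

```python
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    """Treat two strings as the same entity if their similarity clears the threshold."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def merge_records(records):
    """Greedy match-merge: fold each record into the first master it matches."""
    masters = []
    for rec in records:
        for master in masters:
            if similar(rec["name"], master["name"]):
                # Merge: keep the master's name, fill in any missing fields.
                for field, value in rec.items():
                    master.setdefault(field, value)
                break
        else:
            masters.append(dict(rec))
    return masters

# Invented customer rows; "Jon Smith" is a near-duplicate of "John Smith".
rows = [
    {"name": "John Smith", "city": "Austin"},
    {"name": "Jon Smith", "phone": "555-0100"},
    {"name": "Alice Jones"},
]
golden = merge_records(rows)
```

Production match-merge systems replace the string-ratio heuristic with learned similarity models, but the fold-into-a-golden-record shape is the same.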

Intelligent Search Capability

Intelligent search is a type of search that incorporates technologies such as natural language processing and machine learning to interpret a user’s query. By analysing a user’s query, the search engine can figure out what type of information the user is looking for and return accurate and relevant search results. In data science, such a tool can help users get quicker output. A user’s question can be converted into a SQL statement and sent to the connected data store to retrieve appropriate results.
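The question-to-SQL idea can be sketched with simple keyword templates; production intelligent-search tools use machine learning models instead, and the `sales` table and patterns here are invented:

```python
import re

# Toy natural-language-to-SQL routing over one invented table, `sales`.
# The first matching template wins, so order the patterns carefully.
TEMPLATES = [
    (re.compile(r"how many", re.I), "SELECT COUNT(*) FROM sales"),
    (re.compile(r"total revenue", re.I), "SELECT SUM(revenue) FROM sales"),
    (re.compile(r"by region", re.I),
     "SELECT region, SUM(revenue) FROM sales GROUP BY region"),
]

def to_sql(question: str) -> str:
    for pattern, sql in TEMPLATES:
        if pattern.search(question):
            return sql
    raise ValueError(f"No template matches: {question!r}")
```

The SQL string would then be sent to the connected data store exactly as the article describes; the ML-based versions differ only in how the SQL gets generated.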

Automated Mapping of Metadata

Metadata helps manage and use data effectively by classifying and organising data. It describes various aspects of the data such as structure, format, quality, etc., thereby providing a way to classify and organise data to ensure that it is used appropriately. With the implementation of AI, automatic metadata tagging can be implemented, which not only saves time but eliminates errors. With automated mapping of business and technical metadata, the relationship between different data elements can be better understood, which translates to better usage of data within the organisation.
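One small piece of automatic metadata tagging — inferring a column's technical type from sampled values — can be sketched as follows. The sample table and tag names are invented:

```python
from datetime import datetime

def infer_type(values):
    """Guess a column's technical type tag from its sample values."""
    def all_parse(parse):
        try:
            for v in values:
                parse(v)
            return True
        except (ValueError, TypeError):
            return False

    if all_parse(int):
        return "integer"
    if all_parse(float):
        return "float"
    if all_parse(lambda v: datetime.strptime(v, "%Y-%m-%d")):
        return "date"
    return "string"

def tag_columns(table):
    """Map each column name to an inferred type tag."""
    return {name: infer_type(vals) for name, vals in table.items()}

# Invented sample: column name -> sampled string values.
sample = {
    "order_id": ["1001", "1002"],
    "amount": ["19.99", "5.50"],
    "order_date": ["2023-05-01", "2023-05-02"],
    "customer": ["Ada", "Grace"],
}
tags = tag_columns(sample)
```

Real AI-driven taggers also attach business metadata (ownership, sensitivity, lineage); type inference is just the most mechanical layer.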

Code Generation

The future of data science will increasingly shift towards automated code generation. There are already tools that help with code generation, and such tools will see increasing adoption. For instance, ProbeAI, known as the ‘AI Copilot for Data Analysts,’ performs tasks such as auto-generating complex SQL queries and optimising and fixing SQL code. There is even chatbot integration to guide code generation, as with GitHub’s Copilot X.

Pipeline Management

The data pipeline, an integral part of modern data management and analytics systems in which data is collected, transformed and stored in data lakes or data warehouses, is prone to multiple bottlenecks owing to the nature of the data. With AI automation, the performance and efficiency of data pipelines can be improved. Workload optimisation recommendations can help with better resource allocation and optimise data processing pipelines. Workload monitoring and prediction is another area where AI can support. Through intelligent workload analysis, predictive modelling, and anomaly detection, errors can be addressed immediately and performance can be improved.
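A toy version of the anomaly-detection idea: flag any pipeline run whose duration deviates sharply from the mean. The run durations and the deviation threshold below are invented for illustration:

```python
import statistics

def flag_anomalies(durations, threshold=1.5):
    """Return indices of runs more than `threshold` sample standard
    deviations from the mean duration of all runs."""
    mean = statistics.mean(durations)
    stdev = statistics.stdev(durations)
    return [i for i, d in enumerate(durations)
            if abs(d - mean) / stdev > threshold]

# Invented run durations in minutes; run 4 is a clear outlier.
runs = [12.1, 11.8, 12.4, 12.0, 45.0, 11.9]
suspect = flag_anomalies(runs)
```

Real monitoring systems use richer models (seasonality, predictive baselines) than a z-score, but the principle — learn what normal looks like, alert on deviations — is the one the paragraph describes.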

The post 7 Futuristic Use-Cases for AI Automation in Data Engineering appeared first on Analytics India Magazine.

This Could Mark the End of Amazon Kindle As We Know It 

Writing a book is a complex task which requires months or years of literary labour and moments of genius, but text-generating AI tools like OpenAI’s ChatGPT have made it easier for wannabe authors to whip up books in a matter of hours.

According to several reports, Amazon’s self-publishing platform, Kindle Direct Publishing, is increasingly flooded with books generated by AI. This has become a matter of concern for authors and people in the literary ecosystem who worry about the quality and originality of these books.

There were over 200 e-books in Amazon’s Kindle store listing ChatGPT as an author or co-author as of mid-February, with the number rising daily, according to Reuters. The books range from fantasy fiction to self-help and non-fiction, with titles such as ‘ChatGPT smarter than humans?’ and ‘Make more money with ChatGPT’.

However, many authors feel no duty to disclose in the Kindle store that their book was written entirely by a computer, in part because Amazon’s policies do not require it, making it nearly impossible to get a full account of how many e-books may be written by AI.

Amazon’s Kindle Direct Publishing, which launched in 2007, makes it easy for authors to self-publish and market their books without the need for literary agents or publishers. Amazon’s entry into the book market in 1995 brought significant changes to the publishing industry, including the explosion of self-publishing and the decline of physical bookstores. But the advent of generative AI chatbots could very well cost the giant its reputation with ardent readers, who might ditch it for actual books by real authors.

You might say that Amazon’s journey into ecommerce started with books, but AI-generated books might be the end of Kindle, or at least its monopoly, unless they are regulated or at the very least recognised as a different category altogether.

Experts like Mary Rasenberger, executive director of the writers’ group the Authors Guild, also warn that the influx of books created using AI could flood the market. There needs to be transparency from the authors and the platforms about how these books are created, she opines, or readers will end up with a lot of low-quality books.

Amazon’s fourth-quarter results, however, indicated a marginal increase in earnings from subscription-based services, including Kindle.

Amazon’s Kindle Dwindles

Amazon is a major force in self-publishing, releasing over 1.4 million self-published books through its Kindle Direct Publishing each year, with self-published books accounting for 31% of Amazon’s ebook sales.

Amazon’s KDP has spawned a cottage industry of self-published novelists, carving out particular niches for enthusiasts of erotic content and self-help books. Authors can publish their works instantly on the platform without any oversight.

This has attracted new authors, such as Kamil Banc, who works primarily in online fragrance sales. Banc challenged himself to write and publish a book in less than a day, using AI tools like ChatGPT to generate prompts and create illustrations. The result was a 27-page illustrated book called “Bedtime Stories: Short and Sweet, For a Good Night’s Sleep,” which Banc published on Amazon. He said it took him about four hours to create.

Experts suggest that such AI-written content is the tip of a fast-growing iceberg. As new language software allows anyone to rapidly generate reams of prose on almost any topic, human authorship of online material is on track to become the exception rather than the norm. This could lead to more hyper-specific and personalised articles, but also more misinformation and manipulation about politics, products, and more.

Critics also worry that the technology will upend the staid book industry as would-be novelists and self-help gurus looking to make a quick buck are turning to the software to help create bot-made e-books and publish them through Amazon’s Kindle Direct Publishing arm. Ghostwriting has a long tradition, but the ability to automate through AI could turn book writing from a craft into a commodity, they express.

Reuters also noted that some of the books were found to be plagiarised, while others contained errors and inconsistencies, suggesting that they were not properly edited or reviewed by human editors. Critics argue that the ease with which AI-generated books can be published on Amazon is diluting the value of books and potentially deceiving readers who may assume that they are purchasing a book written by a human author. They also highlight the importance of quality control and editorial oversight in ensuring that readers are not deceived or disappointed by the content they purchase.

AIM Findings

AIM noticed a peculiar author on Amazon who goes by ‘Joseph Floyd’. His profile claims a Bachelor’s degree in Computer Science from the University of Texas at Austin and describes him as a visionary entrepreneur and AI enthusiast. He has authored two books, “The ChatGPT-4 Millionaire” and “CHAT GPT BOOK FOR BEGINNERS: Getting Started with ChatGPT-4, Make Money Online with AI and Earn Passive Income Now”. While “CHAT GPT BOOK FOR BEGINNERS” is ranked 50,377 in the Kindle Store, “The ChatGPT-4 Millionaire” sits at 343,458. Not surprisingly, both books have five-star ratings, from 340 and 98 users respectively. We grew suspicious when we looked Joseph up online and found nothing: no trace of him at all.

In another instance, Chris Cowell, a software developer based in Portland, Oregon, wrote a book called “Automating DevOps with GitLab CI/CD Pipelines.” Just a few weeks before it was released, another book with the same title, on the same subject, appeared on Amazon, reported The Washington Post. The book was written by someone named Marie Karpos, whom Cowell had never heard of. The publisher of Karpos’s book, a Mumbai-based education technology firm called inKstall, lists dozens of books on similar technical topics, each with a different author, an unusual set of disclaimers, and matching five-star reviews from the same handful of India-based reviewers.

Will You Read an AI-Written Book?

As AI writes more and more of what we read, vast, unvetted pools of online data may not be grounded in reality. According to Margaret Mitchell, chief ethics scientist at Hugging Face, “The main issue is losing track of what truth is… Without grounding, the system can make stuff up. And if it’s that same made-up thing all over the world, how do you trace it back to what reality is?”

The use of AI chatbots also raises questions about whether AI-generated content can be copyrighted and who owns the content it generates. According to the Copyright, Designs and Patents Act 1988, computer-generated works can be copyrighted if they are “generated by computer in circumstances such that there is no human author of the work.” However, determining the original sources of answers generated by AI chatbots can be difficult, and they may include copyrighted works.

The issue is further complicated by the fact that it is currently legal for AI developers to pursue text and data mining (TDM) for non-commercial purposes, but it is up to users to ensure that their use of content does not violate any laws.

In India, the Parliamentary Standing Committee recommended reviewing the Copyright Act to expand the scope of authorship in AI-generated works. In summary, comprehensive reform that covers all aspects and respects the spirit of copyright law is necessary before AI can be granted copyright ownership.

In conclusion, the emergence of ChatGPT has caused a paradigm shift in the book industry, enabling individuals to create books in just a few hours with the help of AI. But it takes away from the joy of reading a well-curated masterpiece pieced together with careful rumination.

The post This Could Mark the End of Amazon Kindle As We Know It appeared first on Analytics India Magazine.

India IT Embraces AI and You Should Feel Dead Scared 

TCS is attempting to build its own GitHub Copilot alternative, touted for enterprise code generation, company COO N Ganapathy Subramaniam said in a recent interaction with The Economic Times.

According to him, while the project is at an initial stage, the company is looking to harness the vast internal code, data and resources TCS already has access to. The solution will be built through in-house LLM algorithms. This development is likely to be akin to GitHub Copilot, but with the model trained on TCS’s own data.

“The focus is really to build models and using those models, generate the code that can be deployed,” he said. He said if TCS wants to provide secure and contextual generative AI solutions to accelerate client projects, it cannot depend only on multiple large language tools from different providers but should have its own solution.

The announcement comes weeks after Analytics India Magazine reported that Infosys is planning to embrace generative AI along with the revised training of freshers to meet the growing demand for AI professionals in the IT sector.

Could it be possible that TCS is looking to replace its employees with AI, given the recent announcement about its development of a GitHub Copilot alternative for enterprise code generation? There is some indication in that respect. Last year, the IT giant witnessed a total headcount reduction of 2,197, while its attrition rate stood at 21.3%.

While Ankur Kothari, Co-founder of Automation Anywhere, believes that “Automation allows companies to do more with less, enabling them to leverage new opportunities and create new roles that were not possible before,” the emphasis on ‘do more with less’ will always be there.

Where the West is headed

While the boardrooms of Indian IT are still discussing the possibility and potential of generative AI in their sector, their Western counterparts have already started adopting it. Last week, IBM shocked the world with the announcement that the company would be ‘replacing the workers’ with AI.

According to Arvind Krishna, CEO of IBM, hiring in back-office functions, such as human resources, will be suspended or slowed. These non-customer-facing roles amount to roughly 26,000 workers, Krishna said. “I could easily see 30% (7,800) of that getting replaced by AI and automation over a five-year period.”

Recently, IBM’s Red Hat also announced a series of layoffs, with a significant number of individuals from the HR and management departments being affected. This sector is particularly susceptible to replacement by AI, making the news all the more noteworthy.

Additionally, part of any reduction would include not replacing roles vacated by attrition, as per an IBM spokesperson.

In a similar vein, following the elimination of approximately 27,000 positions, Amazon’s CEO also acknowledged that the company will experience a more restrained approach to hiring in certain areas. “As our internal teams assess the priorities of our customers, they have made strategic decisions that may have resulted in downsizing, relocating personnel to new initiatives, or even creating new positions when the requisite skills are not currently possessed by our existing staff,” he had said.

West Admits it to be True

One trend that we can quickly observe is that all major tech companies are openly discussing their plans to reduce hiring while adopting AI technology. However, Indian IT firms are less forthcoming on the matter. For example, Infosys puts out statements such as “coding is much more structured than natural language, leading to more opportunities in data engineering and pipeline creation,” and “AI-driven solutions can help clients streamline processes and cut costs by as much as 60% to 70%.” However, the company does not mention anything about the potential job reductions resulting from these changes.

Similarly, while it is widely accepted that employees in HR and management are likely to be the first ones to fall, with many companies reducing such roles, Zoho told AIM that it doesn’t see AI replacing roles anytime in the near future. According to the company, AI will enhance the productivity of employees by letting them focus on problems that require human intervention rather than redundant and mundane tasks.

Looking at the situation from a distance, it seems that when it comes to job losses, Indian IT is trying to bury its head in the sand, while also trying to ride the wind by putting out statements on generative AI here and there. IT analyst and CEO of EIIRTrend, Pareekh Jain, also commented on the trend, stating that this year, generative AI is the buzzword. He doesn’t believe anything impactful will come out of it for a couple of years.

The post India IT Embraces AI and You Should Feel Dead Scared appeared first on Analytics India Magazine.

After Google, Microsoft Targets Nvidia

In response to a question about Microsoft’s plans to compete with Google’s search business, CEO Satya Nadella indicated that he only intends to make Google ‘dance’. Now it looks like Microsoft sees a partner in AMD, and the duo has set out to break another monopoly: that of Nvidia.

Just a few days ago, reports emerged that Microsoft has been working on its own in-house AI processors, codenamed Athena, since 2019 in a bid to challenge Google and AWS, which have their own sets of in-house chips for training and inference. To top that, new reports allege that this project, which Microsoft has been keeping in the shadows for so long, is actually being done in partnership with the chip company AMD.

Whether part of Athena or not, speculation is rife that Microsoft is financing AMD’s AI chip push. This is surprising, especially since Microsoft has a strong partnership with Nvidia, whose hardware helps train OpenAI’s large language models on Azure. It almost seems as if AMD and Microsoft have planned a coup against Nvidia. No wonder Nvidia’s shares moved lower following the report.

It would not be the first time Microsoft has used AMD. AMD’s silicon already powers the secure AI infrastructure in Azure cloud services, while also playing a role in the Xbox Series X and Series S consoles.

Market positioning

Currently, Nvidia enjoys a monopoly in the GPU market, so the industry is seeking an alternative. At least in the data centre segment, it feels like Intel and AMD are fighting over scraps while Nvidia continues to dominate sales. It’s possible that Microsoft doesn’t want to spend so much money on Nvidia and is looking for an alternative that can handle its workload.

“They [Microsoft] were counting on Intel, but Intel is still not able to deliver. AMD, on the other hand, has got a very good GPU technology, but they fell back in software optimisation,” says semiconductor analyst Sravan Kundojjala. AMD will be releasing the Instinct MI300 later this year which will be their first shot at building a true data centre/HPC-class APU, combining the best of AMD’s CPU and GPU technologies.

To catch up to Nvidia, AMD developed a software framework called ROCm™, an open software platform designed to provide HPC and AI communities with access to open compute languages, compilers, libraries, and tools. However, it is not yet mature enough. Kundojjala explains that Nvidia’s CUDA works with all kinds of industries, but AMD doesn’t have that luxury. In the data center, AMD is trying to focus its attention on certain verticals – certain customers, certain workloads – so it can position itself. One of those customers is Microsoft.

Nvidia alone cannot serve every use case or every customer. Kundojjala says, “it’s a rising tide, lifts all boats kind of situation.” So, if AMD wins, Nvidia doesn’t have to lose. Because the market is so big, both can grow. At the same time, he also argues that Microsoft’s AMD push will not have much to do with the pricing that Nvidia offers for its GPUs.

“Unlike consumer applications, data centre customers are not sensitive to price. Enterprise cloud market doesn’t care about pricing. What they need is that the product has to be good, the roadmap has to be good, and the maturity of the software framework that they provide along with the hardware product – that has to be good,” he added.

Generative AI gold rush

In the recent earnings call, AMD CEO Lisa Su suggested that there are opportunities beyond hyperscalers and game consoles for AMD’s IP, such as semi-custom opportunities with higher volume potential.

AMD has already had success with semi-custom gaming consoles such as the Sony PS and Microsoft Xbox, and it is attempting to replicate this semi-custom business model in data centers with hyperscalers like Microsoft. “In the AI space, I am pretty sure that Microsoft can help AMD accelerate its software,” said Kundojjala.

Kundojjala mentions that AMD’s data centre GPU revenue is a few hundred million dollars, which is negligible compared to Nvidia’s data centre GPU chip revenue of almost $15 to $16 billion. The reason AMD has come so late to the party is that it never really focused on it. ‘Their plate was already full; they were already eating Intel’s lunch,’ he said.

AMD is still reporting solid profits, despite a PC inventory correction and a slowdown in data center sales, while Intel is reporting losses and investing billions in new fabs. Additionally, AMD has recently revealed its new Ryzen 7040U series processors for laptops, making bold claims that the chips not only beat the competition from Intel but also outpace the MacBook M2. Overall, they have been making significant strides in that direction.

Now, AMD realises that generative AI is a crucial use case in data centres. However, there will not be any dramatic change, since its MI300 product will not ramp up until 2024. So, it will take at least another three years for AMD to make its mark in data centre GPUs.

The post After Google, Microsoft Targets Nvidia appeared first on Analytics India Magazine.

OpenAI Secretly Unveils GPT-4-32K API


After the highly anticipated release of GPT-4, OpenAI has released the GPT-4-32k API, as confirmed by several developers who had signed up for the waitlist. This means that GPT-4 can now process 32k tokens, enabling it to handle much longer inputs and generate better results.

GPT-4-32K is powerful enough that you can build an entire application on top of it.

Earlier, OpenAI released APIs for its existing models such as gpt-3.5-turbo and whisper-1.
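As a rough sketch, a request to the new model would look like any other Chat Completions call, with the model field set to gpt-4-32k. The helper below only assembles the JSON request body; the function name and defaults are illustrative, not part of any official client.

```python
import json

def build_chat_request(prompt: str, max_tokens: int = 1024) -> str:
    """Build the JSON body for a Chat Completions request to gpt-4-32k."""
    payload = {
        # Model name as reported by developers with waitlist access.
        "model": "gpt-4-32k",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

# The resulting body would be POSTed to
# https://api.openai.com/v1/chat/completions
# with an "Authorization: Bearer <API key>" header.
request_body = build_chat_request("Summarise this very long report ...")
```

The 32k context window matters here: the entire report could be placed in the user message instead of being chunked across multiple calls.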

In early March, OpenAI released plugins for ChatGPT, allowing it to access various services through API calls and expanding its functionality. Besides Wolfram Research, some of the initial plugins created by companies in partnership with OpenAI include Expedia, Instacart, FiscalNote, KAYAK, Klarna, Milo, Shopify, OpenTable, Slack, Speak and Zapier. Since then, several AI enthusiasts have used the plugins to create their own apps.

However, this move has raised concerns about the potential for disasters due to the lack of regulation and the possibility of risky plugins being created. OpenAI’s focus on safety has led to only 13 curated plugins being released so far, but self-regulation alone seems unlikely to work. GPT-4’s ability to become “agentic” is a further concern, one amplified by access to web APIs. The launch of ChatGPT plugins shows that we need more regulation on AI to prevent disasters from happening.

Although OpenAI has not yet made any official statement, let’s look out for updates.

The post OpenAI Secretly Unveils GPT-4-32K API appeared first on Analytics India Magazine.

12 Resources to Master Prompt Engineering

In an era when AI threatens to wipe out jobs, prompt engineering is an essential skill to stay relevant. Goldman Sachs recently published a report which suggests that approximately 18% of jobs around the world may be automated by generative artificial intelligence (AI), potentially affecting up to 300 million jobs.

So in a scenario like this, prompt engineering is a skill that will shape the future of technology. A well-crafted prompt can mould the output into your desired form. It is an empirical practice that starts with a basic prompt and gradually layers on complexity.
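That layering process can be sketched in code: start from a bare instruction, then progressively add a role, constraints, and an output format. Every name in the helper below is purely illustrative; none of it comes from any library.

```python
def layer_prompt(task, role=None, constraints=(), output_format=None):
    """Compose a prompt from a base task plus optional layers."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    parts.append(task)
    for constraint in constraints:
        parts.append(f"Constraint: {constraint}")
    if output_format:
        parts.append(f"Respond in {output_format}.")
    return "\n".join(parts)

# Basic prompt: just the task.
basic = layer_prompt("Summarise the attached article.")

# Layered prompt: role, constraints and format added on top.
refined = layer_prompt(
    "Summarise the attached article.",
    role="a financial analyst",
    constraints=["under 100 words", "neutral tone"],
    output_format="bullet points",
)
```

Comparing the output of `basic` and `refined` against the same model is the empirical loop the resources below teach: keep the layers that improve results, drop the ones that don’t.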

We have curated a list of useful resources for you to master prompt engineering. Let’s take a look.


ChatGPT Prompt Book

The ChatGPT Prompt Book consists of over 300 unique writing prompts generated by the ChatGPT language model, designed for creative thinking and finding new ideas and perspectives. The prompts cover a diverse range of subjects and are adaptable to various writing styles and genres, making them useful for writers of all levels of experience, from beginners to professionals.

OpenAI Best Practices

If you are new to the field of prompt engineering, OpenAI’s GitHub repository of best practices is a good resource to start with, e.g. the OpenAI Prompt Cookbook.

PromptPapers

Adding to the extensive list of prompt engineering resources is the GitHub repository PromptPapers. This repository contains a collection of essential papers on tuning pre-trained language models using prompts, as well as other useful learning materials.

Midjourney Prompt Helper

Midjourney Prompt Helper is a prompt generator for converting text to image, crafted exclusively for Midjourney and DALL-E, with a focus on ease of use and accessibility for users.

DALL-E Prompt Book

An additional resource for leveraging the complete potential of DALL-E with engaging prompts is the DALL-E Prompt Book, which is a downloadable guide in PDF format that includes fundamental tips for prompts related to photography, illustration, 3D styles, and other topics.

Emergent Mind

Emergent Mind offers a collection of useful ChatGPT examples sourced from different websites. It provides an opportunity to explore a wide range of interesting and entertaining prompts, along with their sources, use cases, and origins. The platform also features a Hotlist, the latest additions, and an all-time best of, making it a valuable resource for discovering new prompts.

Learn Prompting

Besides offering extensive prompt-engineering documentation and a course, the Learn Prompting platform also has a Discord server that has attracted over 1,000 members, making it an excellent place to exchange ideas, find potential partners for collaboration, or just socialise with like-minded individuals and stay updated.


PromptPerfect

PromptPerfect is a state-of-the-art prompt optimiser tool that enhances prompts for various types of language models, including ChatGPT, Midjourney, DALL.E, and StableDiffusion. With PromptPerfect’s multi-goal optimisation, users can customise the prompt optimisation to their specific needs, such as faster optimisation or shorter prompts.

PromptCraft-Robotics

PromptCraft-Robotics is a GitHub repository where individuals can test and share interesting prompts that are specifically designed for LLMs related to robotics. The platform offers a robotics simulator, which is integrated with ChatGPT. PromptCraft-Robotics welcomes users to contribute prompts derived from other LLMs, including GPT-3, GPT-4 and Codex, as well as open-sourced models. Submissions are divided into different robotics genres such as manipulation, home robotics, physical reasoning, and many others. The platform encourages users to format their prompt submissions in markdown and specify which LLM they used.

Maximising the Potential of LLMs: A Guide to Prompt Engineering

Maximising the Potential of LLMs: A Guide to Prompt Engineering gives us an overview of how to harness the full potential of LLMs by generating customised prompts for specific use cases. In addition to delving into the nature of LLMs, their capabilities, and their limitations, the guide also offers insights into the various tasks that LLMs can perform.

Prompt Engineering Guide

The GitHub repository called Prompt Engineering Guide is a treasure trove of resources for those interested in prompt engineering, including learning guides, scientific papers, blog links, tutorials, and datasets. It’s an all-encompassing resource that’s valuable to both developers and practitioners.

Awesome ChatGPT Prompts

Awesome ChatGPT Prompts is a repository of prompts you can use with ChatGPT. It encourages you to expand the list with your own prompts and to use ChatGPT itself to create new ones.


The post 12 Resources to Master Prompt Engineering appeared first on Analytics India Magazine.

AI is more likely to cause world doom than climate change, according to an AI expert


If you've seen any science fiction movie, whether it's a classic like The Terminator or something more recent like M3GAN, you're probably familiar with the storyline where artificial intelligence threatens the world's safety.

Recent rapid advancements in AI kicked off by the release of ChatGPT may make those threats less of a fiction and more of a reality.


Geoffrey Hinton, deemed one of the "godfathers of AI" because of his essential contributions to the space, recently quit his position at Alphabet after a decade at the firm because he wanted to speak out about the risks of AI.

In a recent interview with Reuters, Hinton went as far as comparing the risks of AI to those of climate change.

"I wouldn't like to devalue climate change. I wouldn't like to say, 'You shouldn't worry about climate change.' That's a huge risk too," Hinton told Reuters. "But I think this might end up being more urgent."


He isn't the first to vocalize concerns about the dangers of AI. Other tech leaders, including Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and Stability AI CEO Emad Mostaque, signed a petition calling for a halt to giant AI experiments.

However, Hinton's concerns carry significant weight due to his extensive contributions and experience in the field of AI.

In 1986, Hinton co-authored a paper on backpropagation, a technique that has been critical to the success of the neural networks and deep learning underlying today's AI. His contributions earned him a Turing Award in 2018.
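The core idea of backpropagation fits in a few lines: run a forward pass to compute a loss, then apply the chain rule backwards to recover the gradient of the loss with respect to each weight. The single-neuron sketch below is a minimal illustration, not Hinton's original multi-layer formulation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward_backward(w, x, y):
    """One neuron: forward pass computes the loss, backward pass its gradient."""
    # Forward pass
    z = w * x                      # pre-activation
    a = sigmoid(z)                 # activation (prediction)
    loss = 0.5 * (a - y) ** 2      # squared-error loss

    # Backward pass: chain rule, dL/dw = dL/da * da/dz * dz/dw
    dL_da = a - y
    da_dz = a * (1.0 - a)          # derivative of the sigmoid
    dz_dw = x
    grad_w = dL_da * da_dz * dz_dw
    return loss, grad_w

loss, grad = forward_backward(w=0.5, x=1.0, y=1.0)
```

In a deep network the same backward sweep is repeated layer by layer, each layer passing its upstream gradient to the one before it; that is what the 1986 paper showed could train multi-layer networks efficiently.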


Hinton suggests that the topic of AI is more difficult to solve as there isn't a simple remedy one can prescribe to mitigate the potential dangers.

"With climate change, it's very easy to recommend what you should do: You just stop burning carbon. If you do that, eventually things will be okay," Hinton told Reuters. "For this, it's not at all clear what you should do."

Despite the concerns raised, it appears that advancements in AI continue among industry players. Just last week Microsoft opened access to its AI chatbot to everyone and announced major AI updates coming soon to Bing and Edge.