Otter.ai partners with Slack to share meeting insights with your team


Otter.ai was launched in 2016 and has become a reliable speech-to-text transcription service used by several work productivity applications, including Zoom and Microsoft Teams.

Now, it's coming to Slack in a partnership that will "use AI to bridge the work communications gap", according to Otter.ai.


Professionals will be able to use Otter's transcription tools before, during, and after meetings for AI-generated insights that will then be shared with appropriate team members via Slack.

Otter will also send reminders in Slack before meetings and will enable participants to view notes in real-time, even if people are running late to the meeting.

During the meeting, Otter will take notes and capture information on slide decks to summarize the points discussed. Call members can also add key takeaways and action items to Otter's notes.

After the meeting, Otter will send assignees their action items in Slack and automatically send teammates an AI-generated summary, as well as human- and AI-generated meeting notes.


Sam Liang, co-founder and CEO of Otter.ai, suggested the features will help to boost engagement: "Otter for Slack is the first step in unifying voice and text communication between meetings and Slack — providing conversation continuity among team members."

The combination of features in the app could help professionals curtail time-consuming tasks, such as jotting down notes and debriefing non-attendees. The features could also help professionals avoid meetings that aren't crucial.

However, one potential downside of the app is having to share confidential information with an AI tool.

To take notes and summarize your meetings, Otter has to sit in on your company's meetings, which are likely to contain confidential information. This access leads to concerns about what happens to the information that is saved, where it is stored, how it is stored, and how secure it is.


Tech company Zoom recently faced its own set of privacy challenges after a sneaky change in its Terms of Service, where the firm claimed the right to utilize user content, such as video, audio, and chat data, for its own purposes, including AI projects.

Zoom has since clarified the issue, reassuring customers that it does not use their content to train AI models. However, the controversy did bring more focus to the importance of data privacy when professionals rely on productivity applications.

In its product release, Otter.ai does not address the issue of privacy, which could be the biggest challenge the company faces when trying to convince companies to use its technology.


Andrew Ng & Cohere Unveil Free Course on LLMs with Semantic Search


AI education guru Andrew Ng has come up with a new course “Large Language Models with Semantic Search” in collaboration with Canadian AI startup Cohere. The free, short, beginner-friendly course will teach you to enhance keyword search using Cohere Rerank. Anyone with basic familiarity with Python can join the course.

We just released "Large Language Models with Semantic Search”, built with @cohere, and taught by @JayAlammar and @SerranoAcademy. Search is a key part of many applications. Say, you need to retrieve documents or products in response to a user query; how can LLMs help? You’ll… pic.twitter.com/1ba5JS9G87

— Andrew Ng (@AndrewYNg) August 16, 2023

Course Highlights

Unlocking Advanced Search Techniques: This course equips learners with the essential techniques to seamlessly incorporate LLMs into keyword search systems.

Dense Retrieval Unveiled: Explore dense retrieval, a potent NLP technique that harnesses embeddings to elevate the relevance of search results beyond traditional keyword-based approaches.

Intelligent Reranking: Gain insights into the reranking process, which injects the intelligence of LLMs into search systems. This strategic integration enhances search efficiency and expedites response times.

Upon completing the course, participants will be able to:

  • Apply the fundamental principles of keyword search, the foundation of search systems that predate advanced language models.
  • Use reranking to prioritise responses by query relevance, improving keyword search results and the overall search experience.
  • Harness embeddings to enable deeper search results through semantic understanding.
  • Gain hands-on experience addressing real-world challenges with substantial data and refining accuracy across diverse search results.
  • Integrate language-model-driven search into websites or projects, enhancing user engagement and interactions.

Jay Alammar, Director and Engineering Fellow at Cohere, and Luis Serrano, Lead of Developer Relations at Cohere, will teach the course.

Ng is a computer scientist and entrepreneur known for co-founding Coursera and founding DeepLearning.AI. He aims to democratise AI education and currently leads AI4ALL, a nonprofit promoting diversity in the AI workforce. As part of this mission, he has partnered with various tech companies, including OpenAI, AWS, and LangChain, on several AI courses. Sign up for the course here.

Read more: Top 10 Free Specialised Courses by Andrew Ng

The post Andrew Ng & Cohere Unveil Free Course on LLMs with Semantic Search appeared first on Analytics India Magazine.

Meta Could Learn a Thing or Two from OpenAI


As one of the leading social media companies, Meta witnesses a deluge of content being posted on its platforms every second. Not all of that content is safe, and much of it needs to be moderated.

In November last year, Meta announced that it took action against a total of 23 million pieces of harmful content created by Indians across its social media apps Facebook and Instagram.

Filtering out harmful content on Meta's apps has been a challenge for content moderators, who are continuously bombarded with disturbing and inappropriate material. Extreme content can include gore or lethal violence, such as murder, suicide and violent extremism, as well as animal abuse, hate speech, sexual abuse, child pornography, revenge pornography and more.

A recent BBC report highlighted that Sama, a company entrusted with moderating Facebook posts in East Africa, has come to regret taking on the work. Former employees based in Kenya have reported lasting distress from exposure to graphic content.

To address content moderation problems faced by companies, OpenAI seems to have discovered a potential solution. In a recent blog post, OpenAI stated that businesses can utilize GPT-4 for content moderation purposes.

This is the first time an LLM has been put to use for this purpose. In its blog post, OpenAI said that content moderation using GPT-4 results in much faster iteration on policy changes, reducing the cycle from months to hours. It further elaborated that GPT-4 is able to interpret the rules and nuances of long content policy documentation and adapt instantly to policy updates.

As per OpenAI's research findings, a GPT-4 model trained for content moderation outperforms human moderators who have received basic training. However, both GPT-4 and such moderators fall short when compared to highly skilled and experienced human moderators.

“We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and relieve the mental burden of a large number of human moderators,” write OpenAI authors Lilian Weng, Vik Goel and Andrea Vallone.

Can Meta take inspiration from OpenAI?

Meta has adopted AI for content moderation, but the social media giant has not yet explored using LLMs for the task. It uses AI tools but still largely depends on outsourcing partners that hire employees for content moderation.

Hiring humans for content moderation has led to a lot of trouble for the social media platform, as moderators struggle to cope with the gory content. Meta has been gradually trying to automate the content moderation process, and it could take inspiration from OpenAI here.

OpenAI has taken a lead in a positive direction by exploring the use of GPT-4 for content moderation. And OpenAI is not keeping the tool to itself: anyone with OpenAI API access can implement this approach to create their own AI-assisted moderation system.

However, the prospect of Meta adopting the GPT-4 API to address their challenges seems improbable, given the competitive rivalry between the two companies in the realm of generative AI.

It would be a welcome move if Meta trained Llama 2 for content moderation, similar to what OpenAI is doing, and made it open source. It goes without saying that Meta has more experience in content moderation than OpenAI.

In December 2021, Meta came up with a new AI system using a method called “few-shot learning,” in which models start with a general understanding of many different topics and then use much fewer — or sometimes zero — labeled examples to learn new tasks.

Last year, Meta launched a tool called Hasher-Matcher-Actioner (HMA) that a range of companies can adopt to help stop the spread of terrorist content on their platforms. It is especially useful for smaller companies that don't have the same resources as bigger ones. HMA is built on Meta's previous open-source image- and video-matching software and can be used to flag any type of violating content.

Meta believes in open-sourcing tools. If a version of Llama 2 equipped for content moderation became accessible, it could prove invaluable both for other companies and for Meta itself. Such a tool would facilitate the removal of potentially harmful content while reducing dependence on a human workforce for moderation, relieving moderators of the mental burden.

Notably, Meta has also expressed its ongoing efforts to label AI-generated content across its platforms lately.

The post Meta Could Learn a Thing or Two from OpenAI appeared first on Analytics India Magazine.

LangChain + Streamlit + Llama: Bringing Conversational AI to Your Local Machine

Image By Author

In the past few months, Large Language Models (LLMs) have gained significant attention, capturing the interest of developers across the planet. These models have created exciting prospects, especially for developers working on chatbots, personal assistants, and content creation. The possibilities that LLMs bring to the table have sparked a wave of enthusiasm in the Developer | AI | NLP community.

What are LLMs?

Large Language Models (LLMs) refer to machine learning models capable of producing text that closely resembles human language and comprehending prompts in a natural manner. These models undergo training using extensive datasets comprising books, articles, websites, and other sources. By analyzing statistical patterns within the data, LLMs predict the most probable words or phrases that should follow a given input.

A timeline of LLMs in recent years: A Survey of Large Language Models

By utilizing Large Language Models (LLMs), we can incorporate domain-specific data to address inquiries effectively. This becomes especially advantageous when dealing with information that was not accessible to the model during its initial training, such as a company’s internal documentation or knowledge repository.

The architecture employed for this purpose is known as Retrieval-Augmented Generation (RAG) or, less commonly, Generative Question Answering.

What is LangChain?

LangChain is an impressive and freely available framework meticulously crafted to empower developers in creating applications fueled by the might of language models, particularly large language models (LLMs).

LangChain revolutionizes the development process of a wide range of applications, including chatbots, Generative Question-Answering (GQA), and summarization. By seamlessly chaining together components sourced from multiple modules, LangChain enables the creation of exceptional applications tailored around the power of LLMs.

Read More: Official Documentation

Motivation?

Image By Author

In this article, I will demonstrate the process of creating your own Document Assistant from the ground up, utilizing LLaMA 7B and LangChain, an open-source framework specifically developed for seamless integration with LLMs.

Here is an overview of the blog’s structure, outlining the specific sections that will provide a detailed breakdown of the process:

  1. Setting up the virtual environment and creating file structure
  2. Getting LLM on your local machine
  3. Integrating LLM with LangChain and customizing PromptTemplate
  4. Document Retrieval and Answer Generation
  5. Building application using Streamlit

Section 1: Setting Up the Virtual Environment and Creating File Structure

Setting up a virtual environment provides a controlled and isolated environment for running the application, ensuring that its dependencies are separate from other system-wide packages. This approach simplifies the management of dependencies and helps maintain consistency across different environments.

To set up the virtual environment for this application, I have provided the Pipfile in my GitHub repository. First, let's create the necessary file structure as depicted in the figure. Alternatively, you can simply clone the repository to obtain the required files.

Image By Author: File Structure

Inside the models folder, we will store the LLMs that we download, while the Pipfile will be located in the root directory.

To create the virtual environment and install all the dependencies within it, we can use the pipenv install command from the same directory, or simply run the setup_env.bat batch file, which installs all the dependencies from the Pipfile. This ensures that all the necessary packages and libraries are installed in the virtual environment. Once the dependencies are successfully installed, we can proceed to the next step, downloading the desired models. Here is the repo.

Section 2: Getting LLaMA on your local machine

What is LLaMA?

LLaMA is a large language model designed by Meta AI, the AI research division of Facebook's parent company Meta. With a collection of models ranging from 7 billion to 65 billion parameters, LLaMA stands out as one of the most comprehensive language model families available. On February 24th, 2023, Meta released the LLaMA model to the public, demonstrating its dedication to open science.

Image Source: LLaMA

Considering the remarkable capabilities of LLaMA, we have chosen to utilize this powerful language model for our purposes. Specifically, we will be employing the smallest version of LLaMA, known as LLaMA 7B. Even at this reduced size, LLaMA 7B offers significant language processing capabilities, allowing us to achieve our desired outcomes efficiently and effectively.

Official Research Paper : LLaMA: Open and Efficient Foundation Language Models

To execute the LLM on a local CPU, we need a local model in GGML format. Several methods can achieve this, but the simplest approach is to download the bin file directly from the Hugging Face Models repository. In our case, we will download the Llama 7B model. These models are open-source and freely available for download.

If you're looking to save time and effort, don't worry: I've got you covered. Here's the direct link for you to download the models. Simply download any version and then move the file into the models directory within our root directory. This way, you'll have the model conveniently accessible for your usage.

What is GGML? Why GGML? How GGML? LLaMA CPP

GGML is a tensor library for machine learning: a C++ library that allows you to run LLMs on just the CPU, or on CPU + GPU. It defines a binary format for distributing large language models (LLMs), and it makes use of a technique called quantization that allows large language models to run on consumer hardware.

Now what is Quantization?

LLM weights are floating-point (decimal) numbers. Just as it requires more space to represent a large integer (e.g. 1000) compared to a small integer (e.g. 1), it requires more space to represent a high-precision floating-point number (e.g. 0.0001) compared to a low-precision one (e.g. 0.1). Quantizing a large language model involves reducing the precision with which its weights are represented in order to reduce the resources required to use the model. GGML supports a number of different quantization strategies (e.g. 4-bit, 5-bit, and 8-bit quantization), each of which offers different trade-offs between efficiency and performance.
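To make the idea concrete, here is a minimal sketch of the quantize/dequantize round trip in plain Python. This is an illustrative linear 8-bit scheme of my own, not the exact strategy GGML uses:

```python
# Illustrative linear quantization: map float weights onto 2**bits integer
# levels, then reconstruct them. GGML's real schemes are more sophisticated.

def quantize(weights, bits=8):
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels or 1.0  # guard against all-equal weights
    quantized = [round((w - lo) / scale) for w in weights]
    return quantized, scale, lo

def dequantize(quantized, scale, lo):
    return [q * scale + lo for q in quantized]

weights = [0.12, -0.7, 0.003, 0.45, -0.31]
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)
# Each integer now fits in 8 bits instead of 32, and every restored weight
# lies within half a quantization step of the original.
```

The trade-off is exactly the one described above: fewer bits per weight means less memory and disk, at the cost of a small reconstruction error in every weight.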

Quantized Size of Llama

To effectively use the models, it is essential to consider the memory and disk requirements. Since the models are currently loaded entirely into memory, you will need sufficient disk space to store them and enough RAM to load them during execution. When it comes to the 65B model, even after quantization, it is recommended to have at least 40 gigabytes of RAM available. It’s worth noting that the memory and disk requirements are currently equivalent.

Quantization plays a crucial role in managing these resource demands: unless you have access to exceptional computational resources, it is what makes running these models practical at all. By reducing the precision of the model's parameters and optimizing memory usage, quantization enables the models to be utilized on more modest hardware configurations, keeping them feasible and efficient to run across a wider range of setups.

How do we use it in Python if it's a C++ library?

That's where Python bindings come into play. Binding refers to the process of creating a bridge or interface between two languages, in our case Python and C++. We will use llama-cpp-python, a Python binding for llama.cpp, which provides inference of the LLaMA model in pure C/C++. The main goal of llama.cpp is to run the LLaMA model using 4-bit integer quantization. This integration allows us to effectively utilize the LLaMA model, leveraging the advantages of the C/C++ implementation and the benefits of 4-bit integer quantization.

Supported Models by llama.cpp : Source

With the GGML model prepared and all our dependencies in place (thanks to the pipfile), it’s time to embark on our journey with LangChain. But before diving into the exciting world of LangChain, let’s kick things off with the customary “Hello World” ritual — a tradition we follow whenever exploring a new language or framework, after all, LLM is also a language model.

Image By Author: Interaction with LLM on CPU

Voilà!!! We have successfully executed our first LLM on the CPU, completely offline and in a fully randomized fashion (you can play with the temperature hyperparameter).
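For reference, the "Hello World" call pictured above can be sketched with llama-cpp-python roughly as follows. The model filename is an assumption; substitute whichever GGML bin file you downloaded into the models folder:

```python
def hello_llm(
    model_path="models/llama-7b.ggmlv3.q4_0.bin",  # hypothetical name: use your downloaded GGML bin
    prompt="Q: Name the planets in the solar system. A: ",
):
    """Run a single completion against the local GGML model on the CPU."""
    from llama_cpp import Llama  # imported lazily so this module loads even without the binding

    llm = Llama(model_path=model_path)
    # max_tokens caps the completion length; stop cuts generation at the next "Q:".
    output = llm(prompt, max_tokens=64, temperature=0.8, stop=["Q:"])
    return output["choices"][0]["text"]
```

Raising `temperature` makes sampling more random; setting it near 0 makes the output close to deterministic.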

With this exciting milestone accomplished, we are now ready to embark on our primary objective: question answering of custom text using the LangChain framework.

Section 3: Getting Started with LLM — LangChain Integration

In the last section, we initialized the LLM using llama.cpp. Now, let's leverage the LangChain framework to develop applications using LLMs. The primary interface through which you interact with them is text. As an oversimplification, a lot of models are text in, text out, so a lot of the interfaces in LangChain are centered around text.

The Rise of Prompt Engineering

In the ever-evolving field of programming, a fascinating paradigm has emerged: prompting. Prompting involves providing specific input to a language model to elicit a desired response. This approach allows us to shape the output of the model based on the input we provide.

It’s remarkable how the nuances in the way we phrase a prompt can significantly impact the nature and substance of the model’s response. The outcome may vary fundamentally based on the wording, highlighting the importance of careful consideration when formulating prompts.

To provide seamless interaction with LLMs, LangChain offers several classes and functions that make constructing and working with prompts easy, using a prompt template. A prompt template is a reproducible way to generate a prompt: it contains a text string (the template) that can take in a set of parameters from the end user and generate a prompt. Let's take a few examples.

Image By Author: Prompt with no Input Variables
Image By Author: Prompt with one Input Variable
Image By Author: Prompt with multiple Input Variables

I hope that the previous explanation has provided a clearer grasp of the concept of prompting. Now, let’s proceed to prompt the LLM.

Image By Author: Prompting through Langchain LLM

This worked perfectly fine, but it isn't the optimal use of LangChain. So far we have used the components individually: we took the prompt template and formatted it, then took the LLM and passed the formatted prompt into it to generate the answer. Using an LLM in isolation is fine for simple applications, but more complex applications require chaining LLMs, either with each other or with other components.

LangChain provides the Chain interface for such chained applications. We define a Chain very generically as a sequence of calls to components, which can include other chains. Chains allow us to combine multiple components to create a single, coherent application. For example, we can create a chain that takes user input, formats it with a Prompt Template, and then passes the formatted response to an LLM. We can build more complex chains by combining multiple chains together, or by combining chains with other components.

To understand this, let's create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM, using the individual components that we've already created.

Image By Author: Chaining in LangChain

When dealing with multiple variables, you have the option to input them collectively by utilizing a dictionary. That concludes this section. Now, let’s dive into the main part where we’ll incorporate external text as a retriever for question-answering purposes.

Section 4: Generating Embeddings and Vectorstore for Question Answering

In numerous LLM applications, there is a need for user-specific data that isn’t included in the model’s training set. LangChain provides you with the essential components to load, transform, store, and query your data.

Data Connection in LangChain: Source

The five stages are:

  1. Document Loaders: load data as documents.
  2. Document Transformers: split the documents into smaller chunks.
  3. Embeddings: transform the chunks into vector representations, a.k.a. embeddings.
  4. Vector Stores: store the chunk vectors in a vector database.
  5. Retrievers: retrieve the set of vectors most similar to a query embedded in the same latent space.

Document Retrieval / Question-Answering Cycle

Now, we will walk through each of the five steps to perform a retrieval of chunks of documents that are most similar to the query. Following that, we can generate an answer based on the retrieved vector chunk, as illustrated in the provided image.

However, before proceeding further, we will need to prepare a text for executing the aforementioned tasks. For the purpose of this fictitious test, I have copied a text from Wikipedia regarding some popular DC Superheroes. Here is the text:

Image By Author: Raw Text for Testing

Loading & Transforming Documents

To begin, let's create a document object. In this example, we'll utilize the text loader. However, LangChain offers support for many document types, so depending on your specific document you can employ different loaders. Next, we'll employ the load method to retrieve data and load it as documents from a preconfigured source.

Once the document is loaded, we can proceed with the transformation process by breaking it into smaller chunks. To achieve this, we'll utilize the TextSplitter. By default, the splitter separates the document at the '\n\n' separator. However, if you set the separator to null and define a specific chunk size, each chunk will be of that specified length, so the resulting list length will be roughly the length of the document divided by the chunk size. Let's walk the talk.

Image By Author: Loading and Transforming Doc

Part of the journey is the Embeddings!!!

This is the most important step. Embeddings generate a vectorized portrayal of textual content. This has practical significance since it allows us to conceptualize text within a vector space.

Word embedding is simply a vector representation of a word, with the vector containing real numbers. Since languages typically contain at least tens of thousands of words, simple binary word vectors can become impractical due to a high number of dimensions. Word embeddings solve this problem by providing dense representations of words in a low-dimensional vector space.

When we talk about retrieval, we refer to retrieving a set of vectors that are most similar to a query in a form of a vector that is embedded in the same Latent space.

The base Embeddings class in LangChain exposes two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text.

Image By Author: Embeddings

For a comprehensive understanding of embeddings, I highly recommend delving into the fundamentals as they form the core of how neural networks handle textual data. I have extensively covered this topic in one of my blogs utilizing TensorFlow. Here is the link.

Word Embeddings — Text Representation for Neural Networks

Creating Vector Store & Retrieving Docs

A vector store efficiently manages the storage of embedded data and facilitates vector search operations on your behalf. Embedding and storing the resulting embedding vectors is a prevalent method for storing and searching unstructured data. During query time, the unstructured query is also embedded, and the embedding vectors that exhibit the highest similarity to the embedded query are retrieved. This approach enables effective retrieval of relevant information from the vector store.

Here, we will utilize Chroma, an embedding database and vector store specifically crafted to simplify the development of AI applications incorporating embeddings. It offers a comprehensive suite of built-in tools and functionalities to facilitate your initial setup, all of which can be conveniently installed on your local machine by executing a simple pip install chromadb command.

Image By Author: Creating Vector Store

Up until now, we’ve witnessed the remarkable capability of embeddings and vector stores in retrieving relevant chunks from extensive document collections. Now, the moment has come to present this retrieved chunk as a context alongside our query, to the LLM. With a flick of its magical wand, we shall beseech the LLM to generate an answer based on the information that we provided to it. The important part is the prompt structure.

However, it is crucial to emphasize the significance of a well-structured prompt. By formulating a well-crafted prompt, we can mitigate the potential for the LLM to engage in hallucination — wherein it might invent facts when faced with uncertainty.

Without prolonging the wait any further, let us now proceed to the final phase and discover if our LLM is capable of generating a compelling answer. The time has come to witness the culmination of our efforts and unveil the outcome. Here we goooooo!

Image By Author: Q/A with the Doc

This is the moment we’ve been waiting for! We’ve accomplished it! We have just built our very own question-answering bot utilizing the LLM running locally.

Section 5: Chain All using Streamlit

This section is entirely optional since it doesn’t serve as a comprehensive guide to Streamlit. I won’t delve deep into this part; instead, I’ll present a basic application that allows users to upload any text document. They will then have the option to ask questions through text input. Behind the scenes, the functionality will remain consistent with what we covered in the previous section.

However, there is a caveat when it comes to file uploads in Streamlit. To prevent potential out-of-memory errors, particularly given the memory-intensive nature of LLMs, I'll simply read the document and write it to the temporary folder within our file structure, naming it raw.txt. This way, regardless of the document's original name, TextLoader will seamlessly process it in the future.
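The upload-and-rename trick can be sketched as below; the `temp/raw.txt` path follows the file structure described earlier, and the Streamlit wiring is shown in comments:

```python
import os

def save_upload(uploaded_file, dest="temp/raw.txt"):
    """Write any uploaded document to a fixed path so TextLoader always finds it."""
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with open(dest, "wb") as f:
        f.write(uploaded_file.getbuffer())
    return dest

# Inside the Streamlit app this is wired up roughly as:
#   uploaded = st.file_uploader("Upload a text document", type="txt")
#   if uploaded is not None:
#       save_upload(uploaded)
#       question = st.text_input("Ask a question about the document")
```

Fixing the destination path means the rest of the pipeline never has to care what the user's file was called.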

Currently, the app is designed for text files, but you can adapt it for PDFs, CSVs, or other formats. The underlying concept remains the same since LLMs are primarily designed for text input and output. Additionally, you can experiment with different LLMs supported by the Llama C++ bindings.

Without delving further into intricate details, I present the code for the app. Feel free to customize it to suit your specific use case.

Here's what the Streamlit app will look like.


This time I fed it the plot of The Dark Knight copied from Wikipedia and asked, "Whose face is severely burnt?" The LLM replied: Harvey Dent.

All right, all right, all right! With that, we come to the end of this blog.

I hope you enjoyed this article and found it informative and engaging! You can follow me, Afaque Umer, for more such articles.

I will try to bring up more Machine learning/Data science concepts and will try to break down fancy-sounding terms and concepts into simpler ones.

Afaque Umer is a passionate Machine Learning Engineer. He loves tackling new challenges, using the latest tech to find efficient solutions. Let's push the boundaries of AI together!

Original. Reposted with permission.

More On This Topic

  • DIY Automated Machine Learning with Streamlit
  • Streamlit for Machine Learning Cheat Sheet
  • LangChain Cheat Sheet
  • LangChain 101: Build Your Own GPT-Powered Applications
  • Transforming AI with LangChain: A Text Data Game Changer
  • Unveiling the Power of Meta's Llama 2: A Leap Forward in Generative AI?

Mass Tech Layoffs Ease, AI Upskilling Gains Momentum

The beginning of 2023 was a stressful time for employees in Silicon Valley. Salesforce, Alphabet, Amazon, Microsoft, Spotify, Zoom, and others collectively fired approximately 106,950 employees, worse than the combined job losses of November and December 2022 (50,573 and 40,368 workers, respectively). It seems the worst is over: tech employees can finally breathe a sigh of relief, since job cuts have hit a seven-month low.

In the US, mass layoffs in June declined 90% compared to January 2023. The Indian economy shows a similar trend, as per an ET report. Firing in the tech sector is expected to cool down further, according to an analysis citing proprietary data.

The companies seem to be moving to an unconventional strategy. Instead of following the usual firing-hiring cycle, tech firms have chosen to retain their existing employees and upskill them.

Mellowed drama

The decline in letting go of employees is not unusual, according to Challenger, Gray & Christmas Senior Vice President Andrew Challenger, who noted June is typically the slowest month for major reductions.

The economic downturn can be blamed on the glut of staffing created during the COVID-19 pandemic. Currently, softened demand for tech workers is pushing the market to cut costs. Furthermore, Nitin Bhatt, partner and technology sector leader at EY, believes that hiring volumes may not return to FY21 or FY22 levels. Hiring, where it happens, will be mostly of the replacement variety or for niche skills. “Caution and scrutiny around hiring will continue in the tech sector in the second half of the year,” he stated.

“Should the uncertainty around pipeline conversion, project cancellations, ramp-downs and slower ramp-ups for awarded projects continue, one can expect net hiring in H2 to be negative for the industry,” Bhatt added.

Moving forward

Tech firms that decided to let go of employees aren’t yet ramping up hiring, despite requiring workers with specialized AI skills. On the contrary, companies have introduced reskilling and upskilling programmes for their workforces, driven by the surge in interest in AI across industries.

For instance, even though Indian software leader Infosys has been firm about not letting go of its employees, the market witnessed a 46% fall in the company’s net hiring during fiscal year 2023. Furthermore, the IT giant has launched a comprehensive, free AI certification training programme through Infosys Springboard to equip individuals with the skills needed to succeed in the job market.

The company also signed a Memorandum of Understanding (MoU) with Skillsoft, a virtual education platform, to revamp technology education. The programme’s content ranges from basic to advanced courses covering digital transformation, AI and ML, data science, cloud, cybersecurity, and effective communication and presentation.

Since the generative AI wave has spread like wildfire, companies have been betting big on the emerging technology. Fintech firm Capital One cut over 1,100 roles in its technology segment in January. The company stated it’s hiring AI and ML engineers, but it’s also upskilling current engineers. Speaking to the Washington Post, senior vice president Abhijit Bose said the company has already trained over 100 engineers through its six-month programme.

Similarly, Salesforce made the news for eliminating 10% of its workforce, about 8,000 employees. Fast forward to June: the company has announced courses covering AI basics, generative AI, the fundamentals of natural language processing, and more, to help companies upskill their employees for generative AI opportunities.

As 2023 progresses, the trajectory of layoffs appears to be stabilizing, and the industry is shifting its focus towards upskilling talent rather than cutting it loose. With the industry giants leading the way in upskilling their employees’ expertise in AI, it is likely that the sector will emerge with a workforce better equipped to thrive in the generative AI landscape.

The post Mass Tech Layoffs Ease, AI Upskilling Gains Momentum appeared first on Analytics India Magazine.

$1M Salary Package: AI Companies Pour Money for GenAI Roles

While everyone is riding the generative AI wave, we might as well cash in on it. Those equipped with AI skills seem to be the kings of the current genAI wave.

With companies offering exorbitantly high salaries for AI-related roles, it’s the best place to be. Last month, Netflix was in the limelight for offering a salary of up to $900,000 for a product manager role on its machine learning platform team. The news came at a time when the Hollywood writers’ strike was ongoing. While it caused a lot of hullabaloo, Netflix is not the only company ready to pay handsomely for AI-related roles; a number of companies are following suit. Is the AI salary rage warranted?

According to Indeed, big tech companies including Meta, NVIDIA, Anthropic, Microsoft, Adobe, and many others are offering multiple generative AI roles with salaries as high as half a million dollars. A technical product manager in AI safety at Anthropic is offered up to $520,000, and a principal AI engineer at HubSpot gets $427,000.

It’s not only tech or AI companies that are offering such high salaries. Consumer and services companies implementing AI to transform their products are willing to pay well too. Dating app Hinge was looking to hire a VP of AI to oversee its app’s AI strategy, for a salary of $398,000; the role entails leading a team of data scientists and ML engineers to develop AI features. Retail corporation Walmart was also looking to hire a senior manager for its conversational AI platform for a salary of up to $252,000 a year.

All In One

With generative AI taking centre stage, its influence on the job market is evident. As per AIM Research, the generative AI job market has grown steadily from January to June of this year. There are over 4,200 generative AI-related jobs in the US, a figure that rose 20% in May. Furthermore, job roles have been modified to suit the current trend: the role of generative AI engineer, which did not exist earlier, now requires the combined competencies of a deep learning, ML, NLP, and software engineer.

This amalgamation of roles has become a near-mandatory requirement, spelled out in the qualifications sections of these high-paying openings. For instance, for the role of ‘Senior Research Scientist, Generative AI’ at NVIDIA, which offers a salary of up to $414,000 a year, a candidate should not only possess thorough Python/C++ programming skills but also excellent knowledge of the theory and practice of deep learning, computer vision, natural language processing, or computer graphics. The candidate should also hold a PhD in Computer Science/Engineering, Electrical Engineering, or a related field.

Similarly, a ‘Product Technical Program Manager, Generative AI’ role at Meta, with a salary package of up to $297,000, requires both technical and leadership experience. The candidate must have experience developing large-scale ML/AI platforms, spanning dataset generation, feature development, and model testing, and supporting the development of AI-powered product experiences such as NLP, computer vision, ranking, and personalisation.

The Layoffs-Hiring Balance

Interestingly, the large layoffs across big tech at the start of the year seem to have had minimal impact on the way things are unfolding now. Scale AI, a data platform that provides training data for ML teams, laid off 20% of its workforce in January. Yet last month, Scale AI posted a job opening on Indeed for a ‘software engineer, generative AI’ offering up to $215,000 in salary.

There are even companies that have laid off employees owing to AI chatbots and more efficient processes enabled by generative AI. In May, executive outplacement and career consulting firm Challenger, Gray & Christmas attributed 4,000 job losses to artificial intelligence, the first time the company cited AI as a cause of job loss. Dukaan, an Indian e-commerce platform for merchants, recently laid off 90% of its support staff, replacing them with its new AI chatbot.

Though big tech layoffs have occurred owing to recession or automation, this doesn’t seem to throw cold water on the ambitious hiring processes that companies have started. While things look promising at the moment, it remains to be seen how long the generative AI hiring wave will last.

Jobs requiring Generative AI skills grew almost 9X since January, according to Adzuna. Jobs ask for an average of 5+ years of experience with Generative AI tools and frameworks like GPT-4 or PaLM-2.

— Vin Vashishta (@v_vashishta) June 8, 2023

The post $1M Salary Package: AI Companies Pour Money for GenAI Roles appeared first on Analytics India Magazine.

An Excellent Resource To Learn The Foundations Of Everything Underneath ChatGPT

Image by Freepik

OpenAI, ChatGPT, the GPT-series, and Large Language Models (LLMs) in general – if you are remotely associated with the AI profession or a technologist, chances are high that you’d hear these words in almost all your business conversations these days.

And the hype is real. We cannot call it a bubble anymore. After all, this time the hype is living up to its promises.

Who would have thought machines could understand and respond with human-like intelligence, handling almost all the tasks previously considered a human forte, including creative applications like composing music, writing poetry, and even programming?

The ubiquitous proliferation of LLMs in our lives has made us all curious about what lies underneath this powerful technology.

So, if you are holding yourself back because of the gory-looking details of algorithms and the complexities of the AI domain, I highly recommend this resource to learn all about “What Is ChatGPT Doing … and Why Does It Work?”

Image from Stephen Wolfram Writings

Yes, that's the title of the article by Wolfram.

Why am I recommending this? Because it is crucial to understand the absolute essentials of machine learning and how deep neural networks are related to human brains before learning about Transformers, LLMs, or even Generative AI.

It reads like a mini-book, a substantial piece of literature in its own right, so don’t be put off by its length; take your time with it.

In this article, I will share how to start reading it to make the concepts easier to grasp.

Understanding the ‘Model’ Is Crucial

Its key highlight is the focus on the ‘model’ part of “Large Language Models”, illustrated by an example of estimating the time it takes a ball dropped from each floor of a building to reach the ground.

Image from Stephen Wolfram Writings

There are two ways to achieve this – repeating this exercise from each floor or building a model that could compute it.

In this example, there exists an underlying mathematical formulation that makes it easier to calculate, but how would one estimate such a phenomenon using a ‘model’?

The best bet would be to fit a straight line for estimating the variable of interest, in this case, time.
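As a toy illustration of that idea (my own sketch, not code from Wolfram's post), here is the simplest possible "model": an ordinary least-squares line fit, with `fit_line` being a hypothetical helper name:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b to the given points.
    Returns the slope a and intercept b of the best-fitting line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Points that lie exactly on y = 2x + 1; the fitted "model" recovers the rule
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Once fitted, the line predicts the variable of interest for floors we never measured, which is precisely what a model buys us over repeating the experiment.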

A more profound read into this section would explain that there is never a “model-less model”, which seamlessly takes you to the varied deep learning concepts.

The Core Of Deep Learning

You will learn that a model is a complex function that takes in certain variables as input and results in an output, say a number in digit recognition tasks.

The article goes from digit recognition to a typical cat vs. dog classifier to lucidly explain what features are picked by each layer, starting with the outline of the cat. Notably, the first few layers of a neural network pick out certain aspects of images, like the edges of objects.

Image by Freepik

Key Terminologies

In addition to explaining the role of multiple layers, multiple facets of deep learning algorithms are also explained, such as:

Architecture Of Neural Networks

It is a mix of art and science, says the post – “But mostly things have been discovered by trial and error, adding ideas and tricks that have progressively built significant lore about how to work with neural nets”.

Epochs

An epoch is a full pass over the training data; repeating epochs is an effective way to remind the model of a particular example, to get it to “remember that example”.

Since repeating the same example multiple times isn’t enough, it is important to show different variations of the examples to the neural net.

Weights (Parameters)

You must have heard that one of the LLMs has a whopping 175B parameters. The model’s behaviour varies based on how those knobs are adjusted.

Essentially, parameters are the “knobs you can turn” to fit the data. The post highlights that the actual learning process of neural networks is all about finding the right weights – “In the end, it’s all about determining what weights will best capture the training examples that have been given”

Generalization

The neural networks learn to “interpolate between the shown examples in a reasonable way”.

This generalization helps to predict unseen records by learning from multiple input-output examples.

Loss Function

But how do we know what is reasonable? It is defined by how far the output values are from the expected values, a distance that is encapsulated in the loss function.

It gives us a “distance between the values we’ve got and the true values”. To reduce this distance, the weights are iteratively adjusted, but there must be a way to systematically adjust them in a direction that takes the shortest path downhill.

Gradient Descent

Finding the steepest path of descent on the weight landscape is called gradient descent.

It is all about finding the correct weights that best represent the ground truth by navigating the weight landscape.
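A minimal sketch of that idea (my own illustration, not code from the post): minimize the squared-error loss of a one-weight model y = w*x by repeatedly stepping against the gradient.

```python
def gradient_descent(xs, ys, lr=0.05, steps=200):
    """Find the weight w minimizing the squared-error loss of the
    model y = w*x by stepping in the direction of steepest descent."""
    w = 0.0
    for _ in range(steps):
        # Gradient dL/dw for the loss L = sum((w*x - y)^2)
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad  # move downhill on the loss landscape
    return w

# Data generated by y = 3x; gradient descent should recover w close to 3
w = gradient_descent([1.0, 2.0, 3.0], [3.0, 6.0, 9.0])
```

The learning rate `lr` here is one of the hyperparameters discussed below: too small and descent crawls, too large and the steps overshoot the minimum.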

Backpropagation

Continue reading through the concept of backpropagation, which takes the loss function and works backward to progressively find weights to minimize the associated loss.

Hyperparameters

In addition to weights (aka the parameters), there are hyperparameters that include different choices of the loss function, loss minimization, or even choosing how big a “batch” of examples should be.

Neural Networks For Complex Problems

The use of neural networks for complex problems is widely discussed. Still, the logic underneath that assumption was unclear until this post, which explains how having many weight variables in a high-dimensional space opens up many directions that can lead to the minimum.

Now, compare this with fewer variables, which implies the possibility of getting stuck in a local minimum with no direction to get out.

Conclusion
With this read, we have covered a lot of ground, from understanding the model and how human brains work to taking it to neural nets, their design, and associated terminologies.

Stay tuned for a follow-up post on how to build on this knowledge to understand how ChatGPT works.
Vidhi Chugh is an AI strategist and a digital transformation leader working at the intersection of product, sciences, and engineering to build scalable machine learning systems. She is an award-winning innovation leader, an author, and an international speaker. She is on a mission to democratize machine learning and break the jargon for everyone to be a part of this transformation.

More On This Topic

  • Can ChatGPT Be Trusted as an Educational Resource?
  • Using ChatGPT to Learn SQL
  • Learning How to Use ChatGPT to Learn Python (or anything else)
  • Top Free Resources To Learn ChatGPT
  • Great New Resource for Natural Language Processing Research and…
  • Visual ChatGPT: Microsoft Combine ChatGPT and VFMs

Diamond Cut Diamond: Amazon Combats AI-Generated Reviews with AI 

Amazon recently introduced AI-generated customer review highlights, which present concise summaries of common themes and sentiments from written reviews, helping shoppers quickly gauge whether a product suits their needs. The summarisation tool, which has been in testing since earlier this year, is available to select mobile users in the US.

The new AI-generated feature surfaces product insights and allows easy access to reviews highlighting specific product attributes like “ease of use” or “performance”.

By leveraging AI to present review highlights and encouraging authentic feedback, Amazon strives to make the shopping journey clearer and more transparent for its customers.

Essentially, the technology is grounded in Amazon’s Community Guidelines, which set the parameters for its machine learning models; these models analyse multiple data points to detect risk, while expert investigators apply fraud-detection techniques to prevent fake reviews. The analysis encompasses various data points, such as account relationships, sign-in activity, review histories, and indicators of unusual behaviour.

“Our goal is to ensure that every review in Amazon’s stores is trustworthy and reflects customers’ actual experiences. Amazon welcomes authentic reviews—whether positive or negative —but strictly prohibits fake reviews that intentionally mislead customers,” said David Montague, Vice President of Selling Partner Risk, at Amazon.

“We continue to innovate on our proactive technology to detect fake reviews and other indications of unusual behaviour,” he added.

This is along the same lines as the company’s ‘Rekognition Content Moderation’ system, which it uses to review harmful images in product reviews. The system combines machine learning with human-in-the-loop review, starting with around 40% of image decisions automated and gradually improving. Some self-managed models were transitioned to the Amazon Rekognition Detect Moderation API for better accuracy.

This migration streamlined the architecture, reducing effort and costs. The accuracy of Rekognition Content Moderation reduced the need for human review and its expense, yielding significant benefits for product review moderation.

Amazon is strategically incorporating artificial intelligence into its product offerings. Instead of emphasising prominent AI chatbots or imaging tools, the company is concentrating on services that enable developers to build their own generative AI tools using its AWS cloud infrastructure.

Recently, it partnered with Meta to launch LLaMa 2 and run it on AWS. Although Amazon wouldn’t share details of its AI/ML models, the “review summarisation” tool could well be based on Meta’s LLaMa 70B model.

Earlier this year, Amazon’s CEO, Andy Jassy, said that generative AI holds significant implications for the company’s future. This is evidenced by the ongoing generative AI initiatives across Amazon’s various business units.

More Questions

However, this recent announcement about using AI to combat fake reviews raises questions about potential bias in the summary generation process. While AI can condense vast amounts of information into summaries, there’s concern that Amazon’s profit motives might influence how the AI presents information.

This could lead to favouring high-margin products and established brands, potentially causing disadvantages to small-size sellers with limited marketing budgets.

Legal Action

Amazon’s customer reviews have been a vital part of its platform since 1995, so it’s smart to keep improving their utility with AI. However, with the advent of fake review brokers, reviews have lost credibility to a huge extent; reports indicate that up to 40% of reviews on the platform are potentially fake.

Amazon’s commitment to combating fake reviews is further demonstrated by its recent legal action against brokers suspected of promoting the creation of fraudulent Amazon reviews.

“Another way we fight fake reviews is through legal action. Not only are we targeting the source of the problem but we’re sending a clear message that there’s no place for abuse in our stores and we will hold fraudsters accountable,” said Montague.

The Federal Trade Commission also recently proposed a rule to ban deceptive online reviews, aiming to enhance credibility. The rule’s development, starting in 2019, has involved cases against misleading claims and fake reviews. The proposed rule prohibits selling or soliciting fake reviews, including fabricated profiles, AI-generated content, and reviews from non-users, with penalties for violations.

Other prohibited activities include buying positive/negative reviews for any product, allowing reviews from leadership/affiliates without proper disclosure, operating review sites as “independent” for one’s products, suppressing reviews through threats/intimidation, and selling fake engagement metrics like followers and video views.

Scope for More

Amazon’s efforts have yielded results, as the company reported blocking over 200 million suspected fake reviews in the past year using these methods. The retail platform acknowledges that a collaborative approach involving private sector entities, consumer groups, and governments is crucial for effectively addressing the problem.

Despite Amazon’s endeavours, consumer groups believe that more needs to be done to combat the widespread issue of fake reviews. While Amazon’s use of AI and legal actions against fake review operators have shown progress, the consumer group emphasises the need for stronger legislative measures and further cooperation to ensure a genuine and trustworthy online shopping experience.

The post Diamond Cut Diamond: Amazon Combats AI-Generated Reviews with AI appeared first on Analytics India Magazine.

Banks defending their right to security are missing the point about consumer trust

bank vault

With market figures indicating cybersecurity attacks are increasing in volume and sophistication, it's not surprising that businesses will seek ways to better safeguard their assets. Banks, in particular, want bigger moats since they have more to lose.

However, fortified defenses inevitably mean legitimate users will have to burrow deeper to get access to services. The result is a perennial debate about finding the right balance between security and usability.

Also: 4 ways to avoid clicking malicious links that everyone online should know

And it seems one bank in Singapore might need to address that balance after it introduced a security function that left several of its customers frustrated.

OCBC last week rolled out a feature that locks out access to its digital banking services if mobile apps that have not been downloaded from official app stores, such as the Google Play Store and Huawei AppGallery, are detected on the user's device.

Citing the need to protect customers against malware, the bank said this "enhancement" enables its app to identify errant apps on the customer's device. The security feature also checks apps' permission settings against those the bank deems to present potential risks or that are commonly used by malware-laced apps.

Also: This bank's new app security feature irks customers

When apps that do not meet both criteria are detected, customers will not be able to log in to their account via OCBC's mobile app or online-banking site until they uninstall or remove the "rogue" apps.

This high level of security sounded great — until complaints started popping up. Customers found themselves locked out, even though apps flagged by the bank's new security feature had actually been downloaded from official app stores. These apps included Microsoft Authenticator, LG ThinQ, CCleaner, and Trend Micro. Even apps that were cleared by customers' own antivirus mobile apps were tagged as risky by the OCBC security feature.

Affected customers said the bank's recommended solution of deleting and reinstalling the specific apps from official app stores did not work.

In most cases, OCBC's response was standard: the new security feature is part of an effort to combat fraud and "safeguard our customers" from suspected malicious apps. "We apologize for any inconvenience caused," it said repeatedly to irate customers on its Facebook page. "We seek your patience as this feature is aimed to safeguard customers from malware scams."

Also: The best VPN services (and tips to choose the right one for you)

This situation seems like a case where security has trumped usability. I was relieved, having read the anecdotes of aggrieved OCBC customers, that I had chosen to bank with another firm. But then industry regulator Monetary Authority of Singapore (MAS) stepped up to voice its support for the bank's security feature.

"Security measures will come with some measure of added inconvenience for customers, but they are necessary to maintain security of and confidence in digital banking," MAS said. "Coupled with a vigilant and discerning public, robust security measures will help us strengthen our defense against scams."

In view of the regulator's cheerleading role, I'm now anticipating that the remaining two major local banks, including mine, will follow suit some time in the very near future and roll out a similar security "enhancement".

Perhaps OCBC is serving penance for taking center stage in last year's phishing scams, or maybe it lost a game of rock, paper, scissors and was picked as the first bank to roll out the security feature, and hence had to bear the brunt of customer ire?

Also: How to protect and secure your password manager

Whatever the case, OCBC's muddled launch leaves much to be desired and throws up questions that the whole industry, including its regulator, will need to address collectively.

Consumer trust and shared responsibility

First, let's get one thing straight. This isn't simply a question of privacy, but of user trust. When things don't work the way they're supposed to work, trust will erode.

Use only apps from official app stores and you're good, OCBC customers were assured. But that approach turned out to be problematic.

Also: 8 habits of highly secure remote workers

'Oh, then your app's permission settings are the issue,' customers were told. However, the bank has remained coy about the details of what these permission settings are, presumably so the bad guys aren't tipped off about how to circumvent these flags.

More generally, the lack of information, and transparency, means users are left wondering what exactly is so wrong with the apps — apps that they had downloaded from official stores and that were built by legitimate companies. Does that mean the likes of Microsoft, LG, and Trend Micro are releasing apps that contain security risks, as deemed by OCBC?

And if that isn't the case, does that mean apps are being mistakenly identified by a major bank's security 'enhancement'? A security enhancement that should have been rigorously checked and tested and checked again before it's released to the public?

How much trust, therefore, should consumers put in a security feature that is unable to properly distinguish between legitimate apps and those that carry actual risks?

Also: These experts are racing to protect AI from hackers

To top it off, users are being told their decisions on how they want to operate their devices are invalid. In other words, this security enhancement implies: 'remove your naughty apps or you can't use ours'.

So, when businesses override a customer's decision on how they want their devices to be secured, does that make them fully liable when a breach occurs? I believe it potentially should, since the customer has little say in the apps, including antivirus tools, that they can keep on their phone if they wish to continue accessing their bank account.

I recently had a similar conversation with some industry folks, during which I mentioned a personal peeve with regards to app permissions and organizations' inability, or unwillingness, to explain why they need access to features that are unnecessary to facilitate their services.

It was then suggested to me that the lack of transparency might be buffered by the assurance that these businesses, in their own interests, would not want to develop an app that put their customers at risk, hence, damaging their own brand reputation.

I would argue that this stance shouldn't absolve customers from taking responsibility for their own security posture.

In fact, the Singapore government, perhaps to the delight of businesses, has repeatedly emphasized the need for consumers to assume shared responsibility in safeguarding their cyber hygiene.

"The ongoing fight against scams requires an ecosystem approach, with all stakeholders playing their part in staying vigilant and guarding against scams," MAS had said. The regulator is working on a liability framework that it says will make clear the roles and responsibilities of financial institutions, telcos, and customers to be vigilant against online scams.

Also: 5 easy steps to keep your smartphone safe from hackers

For the sake of their customers (and my sanity), I hope the other banks set to follow in OCBC's footsteps have been taking notes and working to ensure they avoid a similarly messy rollout.

For instance, could OCBC have mitigated some of the issues by offering customers a personal 'whitelist' to which they can add apps initially flagged by the bank's security feature? These apps could be checked against security policies and added to the whitelist only after they have been ascertained to be safe.

Banks could cap the whitelist at, say, three apps, so customers are motivated to prioritize the apps that are absolutely necessary and banks can manage the resources needed to facilitate this approach. They could also use artificial intelligence tools to automate some processes, optimize the app-assessment cycle, and maintain a repository of approved apps, further reducing the effort required to maintain the whitelist.
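To make that hypothetical concrete, a capped whitelist check might look like the sketch below. Every name here is my own illustration of the idea, not anything OCBC has described:

```python
MAX_WHITELIST = 3  # cap so customers prioritize their essential apps

def request_whitelist(whitelist, app_id, passes_security_review):
    """Add a flagged app to a customer's personal whitelist only if it
    has been assessed as safe and the cap has not been reached.
    Returns True if the app ends up whitelisted."""
    if app_id in whitelist:
        return True  # already approved earlier
    if not passes_security_review or len(whitelist) >= MAX_WHITELIST:
        return False  # either deemed unsafe or cap reached
    whitelist.append(app_id)
    return True

# A customer asks for a flagged (but legitimate) app to be allowed
my_whitelist = []
request_whitelist(my_whitelist, "com.microsoft.authenticator", True)
```

The cap keeps the bank's assessment workload bounded while still giving customers a path out of false-positive lockouts.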

And if they're not already doing so, banks should be in touch with major app developers, including antivirus software vendors, on how their permission settings may or may not pass their security checklist. That's assuming they, too, are choosing not to divulge specifics behind app permissions they consider to be risky.

Also: Stop using your 4-digit iPhone passcode in public. Do this instead

Above all, the one key question all banks will want to ask themselves is whether they're prepared to take full liability in the event of a security breach, should they choose to override their customers' security choices.

Featured

OpenAI Teams Up with Stainless to Unveil Version 4 of TypeScript/Node SDK 

In collaboration with Stainless, a platform for high-quality, easy-to-use APIs, OpenAI has announced the release of Version 4 of their TypeScript/Node SDK for the OpenAI API.

We’re thrilled to release a major new version of our TypeScript / Node SDK for the OpenAI API! 🔥
Version 4 offers a huge set of improvements – some of the highlights include:
– Streaming responses for chat & completions
– Carefully crafted TypeScript types
– Support for ESM,… pic.twitter.com/5lHPOQ4jeP

— Logan.GPT (@OfficialLoganK) August 16, 2023

The update includes a host of improvements:

  • Streaming responses for chat and completions
  • Improved TypeScript types
  • Compatibility with ESM, Vercel edge functions, Cloudflare Workers, and Deno
  • A better file-upload API for Whisper, fine-tune files, and DALL-E images
  • Enhanced error handling through automatic retries and error classes
  • Improved performance via TCP connection reuse
  • Simpler initialisation logic

Several users have taken to X to share their excitement about the project.

For more information on how to access the new version, check out their GitHub repository, npm package, and migration guide.

Additionally, OpenAI has acquired the team at AI design studio Global Illumination to work on its core products, including ChatGPT. The team, with prior experience creating and developing products during the early days of Instagram and Facebook, has also played a substantial role in advancing projects at renowned companies like YouTube, Google, Pixar, Riot Games, and other prominent firms.

All of these updates come amid the ChatGPT website experiencing reduced user activity. By the end of July, the user base further decreased by 12% to 1.5 billion users compared to 1.7 billion in June, as reported by SimilarWeb, excluding API usage.

Read more: OpenAI Might Go Bankrupt by the End of 2024

The post OpenAI Teams Up with Stainless to Unveil Version 4 of TypeScript/Node SDK appeared first on Analytics India Magazine.