Hugging Face and ServiceNow release a free code-generating model

Kyle Wiggers

AI startup Hugging Face and ServiceNow Research, ServiceNow’s R&D division, have released StarCoder, a free alternative to code-generating AI systems along the lines of GitHub’s Copilot.

Code-generating systems like DeepMind’s AlphaCode; Amazon’s CodeWhisperer; and OpenAI’s Codex, which powers Copilot, provide a tantalizing glimpse at what’s possible with AI within the realm of computer programming. Assuming the ethical, technical and legal issues are someday ironed out (and AI-powered coding tools don’t cause more bugs and security exploits than they solve), they could cut development costs substantially while allowing coders to focus on more creative tasks.

According to a study from the University of Cambridge, at least half of developers’ efforts are spent debugging and not actively programming, which costs the software industry an estimated $312 billion per year. But so far, only a handful of code-generating AI systems have been made freely available to the public — reflecting the commercial incentives of the organizations building them (see: Replit).

StarCoder, which by contrast is licensed to allow for royalty-free use by anyone, including corporations, was trained on over 80 programming languages as well as text from GitHub repositories, including documentation and programming notebooks. StarCoder integrates with Microsoft’s Visual Studio Code code editor and, like OpenAI’s ChatGPT, can follow basic instructions (e.g., “create an app UI”) and answer questions about code.

Leandro von Werra, a machine learning engineer at Hugging Face and a co-lead on StarCoder, claims that StarCoder matches or outperforms the AI model from OpenAI that was used to power initial versions of Copilot.

“One thing we learned from releases such as Stable Diffusion last year is the creativity and capability of the open-source community,” von Werra told TechCrunch in an email interview. “Within weeks of the release the community had built dozens of variants of the model as well as custom applications. Releasing a powerful code generation model allows anybody to fine-tune and adapt it to their own use-cases and will enable countless downstream applications.”

Building a model

StarCoder is part of BigCode, Hugging Face and ServiceNow’s over-600-person project launched late last year, which aims to develop “state-of-the-art” AI systems for code in an “open and responsible” way. ServiceNow supplied an in-house compute cluster of 512 Nvidia V100 GPUs to train the StarCoder model.

Various BigCode working groups focus on subtopics like collecting datasets, implementing methods for training code models, developing an evaluation suite and discussing ethical best practices. For example, the Legal, Ethics and Governance working group explored questions on data licensing, attribution of generated code to original code, the redaction of personally identifiable information (PII), and the risks of outputting malicious code.

Inspired by Hugging Face’s previous efforts to open source sophisticated text-generating systems, BigCode seeks to address some of the controversies arising around the practice of AI-powered code generation. The nonprofit Software Freedom Conservancy among others has criticized GitHub and OpenAI for using public source code, not all of which is under a permissive license, to train and monetize Codex. Codex is available through OpenAI’s and Microsoft’s paid APIs, while GitHub recently began charging for access to Copilot.

For their parts, GitHub and OpenAI assert that Codex and Copilot — protected by the doctrine of fair use, at least in the U.S. — don’t run afoul of any licensing agreements.

“Releasing a capable code-generating system can serve as a research platform for institutions that are interested in the topic but don’t have the necessary resources or know-how to train such models,” von Werra said. “We believe that in the long run this leads to fruitful research on safety, capabilities and limits of code-generating systems.”

Unlike Copilot, the 15-billion-parameter StarCoder was trained over the course of several days on an open source dataset called The Stack, which has over 19 million curated, permissively licensed repositories and more than six terabytes of code in over 350 programming languages. In machine learning, parameters are the parts of an AI system learned from historical training data and essentially define the skill of the system on a problem, such as generating code.
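To make the notion of parameters concrete, here is a toy sketch (mine, not drawn from the StarCoder paper): even a single fully connected layer carries one learned weight per input-output pair plus one bias per output, and large models repeat this across many, much larger layers.

```python
# Toy illustration of "parameters": a fully connected layer mapping
# n_in inputs to n_out outputs learns n_in * n_out weights plus n_out biases.
def count_linear_params(n_in, n_out):
    return n_in * n_out + n_out

# A tiny 3-input, 2-output layer already has 8 learned values...
print(count_linear_params(3, 2))      # 8
# ...and a single 768x768 projection has over half a million.
print(count_linear_params(768, 768))  # 590592
```

A 15-billion-parameter model such as StarCoder is this idea scaled up across hundreds of such layers.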

The Stack

A graphic breaking down the contents of The Stack dataset. Image Credits: BigCode

Because it’s permissively licensed, code from The Stack can be copied, modified and redistributed. But the BigCode project also provides a way for developers to “opt out” of The Stack, similar to efforts elsewhere to let artists remove their work from text-to-image AI training datasets.

The BigCode team also worked to remove PII from The Stack, such as names, usernames, email and IP addresses, and keys and passwords. They created a separate dataset of 12,000 files containing PII, which they plan to release to researchers through “gated access.”
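As an illustration only (BigCode’s actual pipeline relies on trained detectors and is considerably more sophisticated), regex substitution conveys the basic shape of redacting emails and IP addresses; the patterns and placeholder tokens below are my own, not BigCode’s:

```python
import re

# Hypothetical patterns for illustration; real PII detection uses trained models.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def redact_pii(text):
    """Replace emails and IPv4 addresses with placeholder tokens."""
    text = EMAIL.sub("<EMAIL>", text)
    return IPV4.sub("<IP>", text)

print(redact_pii("Reach jane.doe@example.com on host 192.168.0.1"))
# Reach <EMAIL> on host <IP>
```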

Beyond this, the BigCode team used Hugging Face’s malicious code detection tool to remove files from The Stack that might be considered “unsafe,” such as those with known exploits.

The privacy and security issues with generative AI systems, which for the most part are trained on relatively unfiltered data from the web, are well-established. ChatGPT once volunteered a journalist’s phone number. And GitHub has acknowledged that Copilot may generate keys, credentials and passwords seen in its training data on novel strings.

“Code poses some of the most sensitive intellectual property for most companies,” von Werra said. “In particular, sharing it outside their infrastructure poses immense challenges.”

To his point, some legal experts have argued that code-generating AI systems could put companies at risk if they were to unwittingly incorporate copyrighted or sensitive text from the tools into their production software. As Elaine Atwell notes in a piece on Kolide’s corporate blog, because systems like Copilot strip code of its licenses, it’s difficult to tell which code is permissible to deploy and which might have incompatible terms of use.

In response to the criticisms, GitHub added a toggle that lets customers prevent suggested code that matches public, potentially copyrighted content from GitHub from being shown. Amazon, following suit, has CodeWhisperer highlight and optionally filter the license associated with functions it suggests that bear a resemblance to snippets found in its training data.
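A minimal sketch of what such a match filter might look like conceptually (my assumption of the mechanics, not GitHub’s or Amazon’s code; production systems use efficient indexes over billions of files, not substring scans):

```python
# Flag a suggestion that reproduces a long verbatim span from known public code.
# min_len is an arbitrary illustrative threshold.
def matches_public_code(suggestion, corpus, min_len=20):
    return any(
        suggestion[i:i + min_len] in corpus
        for i in range(len(suggestion) - min_len + 1)
    )

corpus = "def quicksort(arr):\n    if len(arr) <= 1:\n        return arr"
print(matches_public_code("if len(arr) <= 1:\n        return arr", corpus))  # True
print(matches_public_code("entirely original snippet", corpus))              # False
```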

Commercial drivers

So what does ServiceNow, a company that deals mostly in enterprise automation software, get out of this? A “strong-performing model and a responsible AI model license that permits commercial use,” said Harm de Vries, the lead of the Large Language Model Lab at ServiceNow Research and the co-lead of the BigCode project.

One imagines that ServiceNow will eventually build StarCoder into its commercial products. The company wouldn’t reveal how much, in dollars, it’s invested in the BigCode project, save that the amount of donated compute was “substantial.”

“The Large Language Models Lab at ServiceNow Research is building up expertise on the responsible development of generative AI models to ensure the safe and ethical deployment of these powerful models for our customers,” de Vries said. “The open-scientific research approach to BigCode provides ServiceNow developers and customers with full transparency into how everything was developed and demonstrates ServiceNow’s commitment to making socially responsible contributions to the community.”

StarCoder isn’t open source in the strictest sense. Rather, it’s being released under a licensing scheme, OpenRAIL-M, that includes “legally enforceable” use case restrictions that derivatives of the model — and apps using the model — are required to comply with.

For example, StarCoder users must agree not to leverage the model to generate or distribute malicious code. While real-world examples are few and far between (at least for now), researchers have demonstrated how AI like StarCoder could be used in malware to evade basic forms of detection.

Whether developers actually respect the terms of the license remains to be seen. Legal threats aside, there’s nothing at the base technical level to prevent them from disregarding the terms to their own ends.

That’s what happened with the aforementioned Stable Diffusion, whose similarly restrictive license was ignored by developers who used the generative AI model to create pictures of celebrity deepfakes.

But the possibility hasn’t discouraged von Werra, who believes the upsides of releasing StarCoder outweigh the downsides.

“At launch, StarCoder will not ship as many features as GitHub Copilot, but with its open-source nature, the community can help improve it along the way as well as integrate custom models,” he said.

The StarCoder code repositories, model training framework, dataset-filtering methods, code evaluation suite and research analysis notebooks are available on GitHub as of this week. The BigCode project will maintain them going forward as the groups look to develop more capable code-generating models, fueled by input from the community.

There’s certainly work to be done. In the technical paper accompanying StarCoder’s release, Hugging Face and ServiceNow say that the model may produce inaccurate, offensive, and misleading content as well as PII and malicious code that managed to make it past the dataset filtering stage.

Wix Vs WordPress: Young Upstart Challenges Old-Timer With DevTools

Wix has risen to prominence over the past few years as a no-code WordPress alternative. However, as the platform grew and added more features, it became more difficult for webmasters to adapt over time. To combat this issue, Wix released a new visual integrated development environment (IDE) called Codux.

Due to its nature as a visual-first IDE, Codux has proven itself to be a go-between for developers and website designers. While not built to be a standalone IDE, Codux comes as part of Wix’s strategy of creating a growing arsenal of developer tools.

Wix <3 React

Codux is an IDE built specifically for coding in React, Wix’s go-to JS framework. The platform is also extremely visual in nature, providing an in-IDE renderer that will show a visual preview of what the code would look like on the website. Most of the platform’s power lies in making quick modifications with UI elements, with coding mostly being relegated to the sidelines.

In fact, Codux wasn’t even made to be a standalone IDE, instead being designed to work alongside the main IDE. Nadav Abrahami, the co-founder and head of innovation at Wix, said, “[Codux is not] a replacement for your IDE. Currently, like most developers, I use an IDE of my own choice set up just the way I like it. I won’t quickly replace it with something else, and I wouldn’t expect you to do that either.”

Instead of replacing a primary IDE, Codux aims to provide a new way to interface with pre-written code. Moreover, the visual interface shows code changes in real time, allowing for closer collaboration between designers and developers. The IDE has support for TypeScript, Git, Sass, and, most importantly, React.

This comes as no surprise, as Wix is invested in the development of React through the React Native Partners program, further incentivising them to build out a React-powered accompanying tech stack. The web development platform has also contributed to the open-source community with projects like a navigation package for React Native.

Codux comes as a part of Wix’s effort to expand the scope of their website development solutions. In the recent past, the company also launched Velo, a full-stack web development platform, and Stylable, a CSS preprocessor for managing visual styles.

Moreover, when viewed against WordPress, one of the biggest competitors of Wix, the open-source website builder begins to show its age.

WordPress’ Worst Nightmare?

WordPress first grew to prominence in the late 2000s as a way to get websites up and running with minimal code. It is an open source project built on PHP and MySQL — a far cry from the React and Angular-powered custom websites of today.

Not only are modern JS frameworks faster and lighter, they are also built for modern web development practices like lazy loading. There are also a vast number of unoptimised themes and plugins available on WordPress, resulting in many WP-powered websites becoming slow and sluggish.

With the tech stack built up by Codux and Velo, Wix provides an accessible solution that goes beyond simply a no-code website builder. These offerings also place Wix in a different market when compared to products like Squarespace and Shopify, elevating its target demographic from citizen developers to full-fledged web devs.

This is a sentiment shared by Codux users. As user no_wizard put it on Hacker News: “One [thing] I’d like to use it for is to take the React components created for our design system and give designers the ability to run with it to create mock-ups. This would cut a lot of translation overhead and boilerplate significantly while keeping design and engineering on the same ‘page.’”

Apart from bringing teams on board, it seems that developers who were getting into React also found Codux a perfect environment to learn the framework. Pavlo Lisovyi, a product manager at KLA, said in a blog post, “While learning React, I needed a simple sandbox environment — and I found it without leaving Codux. I just created a new board and started coding, with a preview of rendered output and all the tools mentioned above.”

Going back to Abrahami’s quote, it doesn’t seem like Codux is out to take over traditional IDEs. It is just another step in Wix’s product strategy to increase their market share and provide a fuller stack of web dev tools to citizen developers and professional developers alike.

The post Wix Vs WordPress: Young Upstart Challenges Old-Timer With DevTools appeared first on Analytics India Magazine.

The Ultimate Open-Source Large Language Model Ecosystem

Image by Author

We're witnessing an upsurge in open-source language model ecosystems that offer comprehensive resources for individuals to create language applications for both research and commercial purposes.

Previously, we have highlighted Open Assistant and OpenChatKit. Today, we'll delve into GPT4ALL, which extends beyond specific use cases by offering comprehensive building blocks that enable anyone to develop a chatbot similar to ChatGPT.

What is the GPT4ALL Project?

GPT4ALL is a project that provides everything you need to work with state-of-the-art natural language models. You can access open source models and datasets, train and run them with the provided code, use a web interface or a desktop app to interact with them, connect to the Langchain Backend for distributed computing, and use the Python API for easy integration.

The Apache-2-licensed GPT4All-J chatbot was recently launched by the developers, trained on a vast, curated corpus of assistant interactions comprising word problems, multi-turn dialogues, code, poems, songs, and stories. To make it more accessible, they have also released Python bindings and a Chat UI, enabling virtually anyone to run the model on a CPU.

You can try it yourself by installing the native chat client on your desktop.

  • Mac/OSX
  • Windows
  • Ubuntu

After that, run the GPT4ALL program and download the model of your choice. You can also download models manually here and install them in the location indicated by the model download dialog in the GUI.

Image by Author

I have had a perfect experience using it on my laptop, receiving fast and accurate responses. Additionally, it is user-friendly, making it accessible even to non-technical individuals.

Gif by Author

GPT4ALL Python Client

GPT4ALL comes with Python and TypeScript clients, a web chat interface, and a LangChain backend.

In this section, we will look into the Python API to access the models using nomic-ai/pygpt4all.

  1. Install the Python GPT4ALL library using pip:
pip install pygpt4all
  2. Download a GPT4All model from http://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin. You can also browse other models here.
  3. Create a text callback function, load the model, and provide a prompt to the model.generate() function to generate text. Check out the library documentation to learn more.
from pygpt4all.models.gpt4all import GPT4All

def new_text_callback(text):
    print(text, end="")

model = GPT4All("./models/ggml-gpt4all-l13b-snoozy.bin")
model.generate("Once upon a time, ", n_predict=55, new_text_callback=new_text_callback)

Moreover, you can download and run inference using transformers. Just provide the model name and the version. In our case, we are accessing the latest and improved v1.3-groovy model.

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "nomic-ai/gpt4all-j", revision="v1.3-groovy"
)

Getting Started

The nomic-ai/gpt4all repository comes with source code for training and inference, model weights, dataset, and documentation. You can start by trying a few models on your own and then try to integrate it using a Python client or LangChain.

GPT4ALL also provides a CPU-quantized GPT4All model checkpoint. To access it, we have to:

  • Download the gpt4all-lora-quantized.bin file from Direct Link or [Torrent-Magnet].
  • Clone this repository and move the downloaded bin file to the chat folder.
  • Run the appropriate command to access the model:
    • M1 Mac/OSX: cd chat;./gpt4all-lora-quantized-OSX-m1
    • Linux: cd chat;./gpt4all-lora-quantized-linux-x86
    • Windows (PowerShell): cd chat;./gpt4all-lora-quantized-win64.exe
    • Intel Mac/OSX: cd chat;./gpt4all-lora-quantized-OSX-intel

You can also head to Hugging Face Spaces and try out the Gpt4all demo. It is not official, but it is a start.

Image from Gpt4all

Resources:

  • Technical Report: GPT4All-J: An Apache-2 Licensed Assistant-Style Chatbot
  • GitHub: nomic-ai/gpt4all
  • Python API: nomic-ai/pygpt4all
  • Model: nomic-ai/gpt4all-j
  • Dataset: nomic-ai/gpt4all-j-prompt-generations
  • Hugging Face Demo: Gpt4all
  • ChatUI: nomic-ai/gpt4all-chat: gpt4all-j chat

GPT4ALL Backend: GPT4All — LangChain 0.0.154
Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master's degree in Technology Management and a bachelor's degree in Telecommunication Engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.


Salesforce Unveils Slack GPT, Enabling Conversational AI for Work

Today at World Tour NYC, Salesforce announced Slack GPT, a new conversational AI experience natively integrated into Slack that will transform how work gets done.

Slack will bring AI natively into the user experience to help customers work smarter, learn faster, and communicate better. For example, AI-powered conversation summaries and writing assistance will be available directly in Slack, all with just a click.

With the newly released Slack platform, customers will be able to build no-code workflows that embed AI actions with simple prompts at each step, making it easy for anyone to deploy AI automation.

They can also securely integrate a large language model (LLM) from companies like OpenAI and Anthropic, or use the LLM of their choice. This includes those funded by Salesforce Venture’s generative AI fund, and in the future, Salesforce’s proprietary LLMs.

Most importantly, the AI is customisable to a company’s unique needs — whether they want to integrate a language model of choice, build their own AI-powered no-code workflows, or bring AI effortlessly into the Slack experience.

“The real power of this technology is when AI can analyse and act on the most valuable data from a company’s most trusted resource — its own internal knowledge. Slack GPT is the conversational AI platform of the future, helping organisations easily tap into their trusted customer data and essential employee knowledge so they can work smarter and make smarter decisions faster,” Lidiane Jones, CEO of Slack, said.

The post Salesforce Unveils Slack GPT, Enabling Conversational AI for Work appeared first on Analytics India Magazine.

The new AI-powered Bing is now open to everyone — with some serious upgrades

The release of Bing's AI-powered search engine nearly three months ago placed Bing ahead in the AI race, even earning it the title of ZDNET's overall best AI chatbot. Because the chatbot was under limited preview, however, not everyone had access to it — until today.

Starting Thursday, Bing Chat is moving from limited preview to open preview, meaning everyone can access the chatbot without the need to join a pesky waitlist.

Also: ChatGPT vs. Bing Chat: Which AI chatbot is better for you?

All you need to do to access the preview now is sign into Bing with your Microsoft account, and you're all set.

Although the new Bing is open to everyone, it is still in preview, meaning your feedback is still welcome and sought after by Microsoft.

"By shifting over from a limited preview to an open preview, we're expecting to be able to learn more, and as a result, be able to customize the experience based on user feedback," Dena Saunders, Partner General Manager, Bing, told ZDNET.

Also: This new technology could blow away GPT-4 and everything like it

Under the open preview, users will not only be able to access all the popular and useful features of the new Bing, based on GPT-4, but they will also get access to a ton of new updates that Microsoft unveiled in Thursday's blog post.

We rounded up the biggest upgrades below, and as someone who saw them all demoed, I can tell you: you will want to try these.

Incorporation of new visual elements

Sometimes the best answer to a question can be expressed through a visual element; after all, a picture is worth a thousand words, right? Bing is harnessing this idea by bringing its visual features such as Knowledge Cards to the chat.

For example, in the demo, the chatbot was asked "How deep is the Indian Ocean?" It was then able to provide a graphic showing the actual ocean and its depths.

If the question you asked the chatbot could be better answered with a visual element, like a graph or chart, the chatbot will provide one.

For example, when asked "Brazil cities by population" the chatbot produced the chart below.

One of the biggest features of OpenAI's GPT-4 large language model is its ability to accept both text and image inputs and output human-like text. Microsoft is now harnessing this technology in its own chatbot.

For example, in the demo the user uploaded a photo of a crocheted octopus and asked how to make it. The new Bing was able to read the image to produce a response.

Lastly, Bing's Image creator will now work with all languages in Bing, allowing users to generate AI images from text in their native language.

View your chats at a later date

A couple of handy updates are coming to the chat interface to make conversations easier to access at a later time.

Also: How to use Midjourney to generate amazing images

Starting soon, users will be able to export and share their conversations with the chatbot. The export feature will allow users to take the exact conversation within the chatbot and export it to other tools such as Microsoft Word.

This feature can be especially useful when using the bot to create something for you, such as an essay, an image or even a chart.

If you want to revisit a conversation later without exporting it, you will soon be able to save it to your chat history.

Microsoft Edge updates

Microsoft Edge's interface is getting a facelift. Some of the appearance changes, which will be available to users soon, include rounded corners, organized containers and semi-transparent visual elements.

Also: How to use Microsoft Edge's integrated Bing AI Image Creator

In addition to the aesthetics upgrade, new features will be arriving to Edge to improve user productivity.

For example, improved summarization capabilities will allow Edge to summarize long documents such as PDFs, and longer form websites from the sidebar.

Also: This AI chatbot can sum up a PDF and answer questions you have about it

In the demo, the user asked the chatbot a specific question about the PDF opened on the left hand side of the screen, and within seconds, the chatbot produced an answer in the bar on the right.

With Actions in Edge, users will be able to complete a task in the browser simply by giving the chatbot a command. For example, in the sidebar you can ask the chatbot to create tab groups, and it will do so for you.

You can also ask the chatbot to play a movie for you and then it will show you options of where you can stream it and play the movie for you at the touch of a button.

Plug-ins for third parties

Similarly to ChatGPT, Microsoft is working on building third-party plug-ins into the chat experience.

Two examples Microsoft provided were a Wolfram Alpha plug-in which would allow for advanced mathematical calculations and graphs to occur in Bing Chat and an Open Table plug-in which would give the chatbot the ability to make reservations for you.

Microsoft doubles down on AI with new Bing features

The company's betting the farm on generative AI

Kyle Wiggers

Microsoft is embarking on the next phase of Bing’s expansion. And — no surprise — it heavily revolves around AI.

At a preview event this week in New York City, Microsoft execs including Yusuf Mehdi, the CVP and consumer chief marketing officer, gave members of the press including this reporter a look at the range of features heading to Bing over the next few days, weeks and months.

They don’t so much reinvent the wheel as build on what Microsoft has injected into the Bing experience over the past three months or so. Since launching Bing Chat, the chatbot powered by OpenAI’s GPT-4 and DALL-E 2 models, Microsoft says that visitors to Bing — which has grown to exceed 100 million daily active users — have engaged in over half a billion chats and created over 200 million images.

Looking ahead, Bing will become more visual, thanks to more image- and graphic-centric answers in Bing Chat. It’ll also become more personalized, with capabilities that’ll allow users to export their Bing Chat histories and draw in content from third-party plugins (more on those later). And it’ll embrace multimodality, at least in the sense that Bing Chat will be able to answer questions within the context of images.

“I think it’s safe to say that we’re underway with the transformation of search,” Mehdi said in prepared remarks. “In our minds, we think that today will be the start of the next generation of this ‘search mission.'”

Open, and visual

As of today, the new Bing — the one with Bing Chat — is now available waitlist-free. Anyone can try it out by signing in with a Microsoft Account.

It’s more or less the experience that launched several months ago. But as alluded to earlier, Bing Chat will soon respond with images — at least where it makes sense. Answers to questions (e.g. “Where is machu picchu?”) will be accompanied by relevant images if any exist, much like the standard Bing search flow but condensed into a card-like interface.

Microsoft Bing Chat

Answers with visuals, new in Bing Chat.

In a demo at the event, a spokesperson typed the question “Does the saguaro cactus grow flowers?” and Bing Chat pulled up a paragraph-long response alongside an image of the cactus in question. For me, it evoked the “knowledge panels” in Google Search.

Microsoft isn’t saying which categories of content, exactly, might trigger an image. But it does have filtering in place to prevent explicit images from appearing — or so it claims.

Sarah Bird, the head of responsible AI at Microsoft, told me that Bing Chat benefits from the filtering and moderation already in place with Bing search. Beyond this, Bing Chat uses a combination of “toxicity classifiers,” or AI models trained to detect potentially harmful prompts, and blacklists to keep the chat relatively clean.
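As a rough, hypothetical sketch of such layered filtering (the blocklist entries and scoring stub below are invented for illustration; Microsoft's classifiers are trained models, not keyword checks):

```python
# Hypothetical layered prompt filter: a blocklist plus a stubbed "classifier".
BLOCKLIST = {"write a phishing email", "generate malware"}

def toxicity_score(prompt):
    # Stand-in for a trained toxicity classifier returning a score in [0, 1].
    lowered = prompt.lower()
    return 1.0 if any(bad in lowered for bad in BLOCKLIST) else 0.0

def allow_prompt(prompt, threshold=0.5):
    return toxicity_score(prompt) < threshold

print(allow_prompt("Summarize this article for me"))  # True
print(allow_prompt("Please write a phishing email"))  # False
```

In a real system, the blocklist and the classifier operate as complementary layers: the list catches known-bad strings cheaply, while the model generalizes to paraphrases.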

Those measures didn’t prevent Bing Chat from going off the rails when it first rolled out in preview in early February, it’s worth noting. Our coverage found the chatbot spouting vaccine misinformation and writing a hateful screed from the perspective of Adolf Hitler. Other reporters got it to make threats, claim multiple identities and even shame them for admonishing it.

In another knock against Microsoft, the company just a few months ago laid off the ethics and society team within its larger AI organization. The move left Microsoft without a dedicated team to ensure its AI principles are closely tied to product design.

Bird, though, asserts that meaningful progress has been made and that these sorts of AI issues aren’t solved overnight — public though Bing Chat may be. Among other measures, a team of human moderators is in place to watch for abuse, she said, such as users attempting to use Bing Chat to generate phishing emails.

But — as members of the press weren’t given the chance to interact with the latest version of Bing beyond curated demos — I can’t say to what extent all that’s made a difference. It’ll doubtless become clear once more folks get their hands on it.

One aspect of Bing Chat that is improving is the transparency around its responses — specifically responses of a fact-based nature. Soon, when asked to summarize a document or about the contents of a document (e.g. “what does this page say about the Brooklyn Bridge?”), whether a 20-page PDF or a Wikipedia article, Bing Chat will include citations indicating where in the text the information came from. Clicking on them will highlight the corresponding passage.
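Mechanically, the highlight-on-click behavior could be as simple as recording the character span of the supporting passage; this sketch is my own guess at the idea, not Microsoft's implementation:

```python
def find_citation_span(document, passage):
    """Return the (start, end) character span supporting a cited answer, or None."""
    idx = document.find(passage)
    return (idx, idx + len(passage)) if idx != -1 else None

doc = "The Brooklyn Bridge opened in 1883. It spans the East River."
print(find_citation_span(doc, "opened in 1883"))  # (20, 34)
```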

Productivity emergent

In another new feature on the visual front, Bing Chat will be able to create charts and graphs when fed the right prompt and data. Previously, asking something like “Which are the most populous cities in Brazil?” would yield a basic list of results. But in a near-future preview, Bing Chat will present those results visually and in the chart type of a user’s choosing.

This seemingly represents a step for Bing toward a full-blown productivity platform, particularly when paired with the enhanced text-to-image generation capabilities coming down the pipeline.

Microsoft Bing Chat

The Image Creator in Bing Chat.

In the coming weeks, Bing Image Creator — Microsoft’s tool that can generate images from text prompts, powered by DALL-E 2 — will understand more languages aside from English (over 100 total). As with English, users will be able to refine the images they generate with follow-up prompts (e.g. “Make an image of a bunny rabbit,” followed by “now make the fur pink”).

Generative art AI has been in the headlines a lot lately, and not always for encouraging reasons.

Plaintiffs have brought several lawsuits against OpenAI and its rival vendors, alleging that copyrighted data — mostly art — was used without their permission to train generative models like DALL-E 2. Generative models “learn” to create art and more by “training” on sample images and text, usually scraped indiscriminately from the public web.

I asked Bird about whether Microsoft is exploring ways to compensate creators whose work was swept up in training data, even if the company’s official position is that it’s a matter of fair use. Several platforms launching generative AI tools, including Shutterstock, have kick-started creators funds along these lines. Others, like Spawning, are creating mechanisms to let artists opt out of AI model training altogether.

Bird implied that these issues will eventually have to be confronted — and that content creators deserve some form of recompense. But she wasn’t willing to commit to anything concrete this week.

Multimodal search

Elsewhere on the image front, Bing Chat is gaining the ability to understand images as well as text. Users will be able to upload images and search the web for related content, for example copying a link to an image of a crocheted octopus and asking Bing Chat the question “how do I make that?” to get step-by-step instructions.

Multimodality powers the new page context function in the Edge app for mobile, as well. Users will be able to ask questions in Bing Chat related to the mobile page they’re viewing.

Microsoft wouldn’t say either way, but it seems likely that these new multimodal abilities stem from GPT-4, which can understand images in addition to text. When OpenAI announced GPT-4, it didn’t make the model’s image understanding capabilities available to all customers — and still hasn’t. I’d wager that Microsoft, though, being a major investor in and close collaborator with OpenAI, has some sort of privileged access.

Any image upload tool can be abused, of course, which is why Microsoft is employing automated filtering and hashing to block illicit uploads, according to Bird. The jury’s out on how well these work, though — we weren’t given the chance to test image uploads ourselves.

New chat features

Multimodality and new visual features aren’t all that’s coming to Bing Chat.

Soon, Bing Chat will store users’ chat histories, letting them pick up where they left off and return to previous chats when they wish. It’s an experience akin to the chat history feature OpenAI recently brought to ChatGPT, showing a list of chats and the bot’s responses to each of those chats.

The specifics of the chat history feature have yet to be ironed out, like how long chats will be stored, exactly. But users will be able to delete their history at any time regardless, Microsoft says — addressing the criticisms several European Union governments had against ChatGPT.


Exporting and sharing chats from Bing Chat.

Bing Chat will also gain export and share functionalities, letting users share conversations on social media or to a Word document. Dena Saunders, a partner GM in Microsoft’s web experiences team, told TechCrunch that a more robust copy-and-paste system is in the works — but not in preview just yet — for graphs and images created through Bing Chat.

Perhaps the most transformative addition to Bing Chat, though, is plugins. From partners like OpenTable and Wolfram Alpha, plugins greatly extend what Bing Chat can do, for example helping users book a reservation or create visualizations and get answers to challenging science and math questions.

Like chat history, the not-yet-live plugins functionality is in the very preliminary stages. There’s no plugins marketplace to speak of; plugins can be toggled on or off from the Bing Chat web interface.

Saunders hinted, but wouldn’t confirm, that the Bing Chat plugins scheme was associated with — or perhaps identical to — OpenAI’s recently introduced plugins for ChatGPT. That’d certainly make sense, given the similarities between the two.

Edge, refreshed

Bing Chat is available through Edge as well as the web, of course. And Edge is getting a fresh coat of paint alongside Bing Chat.

First previewed in February, the new and improved Edge features rounded corners in line with Microsoft’s Windows 11 design philosophy. Elements in the browser are now more “containerized,” as one Microsoft spokesperson put it, and there are subtle tweaks throughout, like the Microsoft Account image moving left-of-center.

In Compose, Edge’s Bing Chat-powered tool that can write emails and more given a basic prompt (e.g. “write an invitation to my dog’s birthday party”), a new option lets users adjust the length, phrasing and tone of the generated text to nearly anything they’d like. Type in the desired tone, and Bing Chat will write a message to match — Bird says filters are in place to prevent the use of clearly problematic tones, like “hateful” or “racist.”

Far more intriguing than Compose, though — at least to me — are actions in Edge, which translate certain Bing Chat prompts into automations.

Typing a command like “bring my passwords from another browser” in Bing Chat in the Edge sidebar opens Edge’s browsing data settings page, while the prompt “play ‘The Devil Wears Prada'” pulls up a list of streaming options including Vudu and (predictably) the Microsoft Store. There’s even an action that automatically organizes — and color-coordinates — browsing tabs.


Edge actions in… action.

Actions are in a primitive stage at present. But it’s clear where Microsoft is going here. One imagines actions eventually expanding beyond Edge to reach other Microsoft products, like Office 365, and perhaps one day the whole Windows desktop.

Saunders wouldn’t confirm or deny that this is the endgame. “Stay tuned for Microsoft Build,” she told me, referring to Microsoft’s upcoming developer conference. We shall.

How boAt is Rocking the Technology Landscape 

In a world where startups pour vast resources into developing proprietary tech stacks, boAt has distinguished itself by taking a different approach: outsourcing its technology needs to multiple partners, including Google, SAP and Shopify, rather than shouldering the overhead of building and maintaining applications in-house with its own tech talent.

At the same time, boAt maintains a small but dedicated research and development team of 35 technologists and data scientists, keeping the organisational structure lean, which gives it a distinct advantage in a highly competitive marketplace.

“We focus on FMEG product quality, branding and marketing, and not on homegrown technology development,” Shashwat Singh, CIO at boAt, told AIM at SAP Now India, held in Mumbai last month.

The company is also betting big on generative AI. For instance, to improve customer experience management, which includes tasks like moderation, intent detection, and sentiment analysis, it has partnered with a technology enabler, whose name boAt declined to reveal at the time of discussion, citing a confidentiality agreement.

“We have partnered with a company that already has some of these capabilities, who also happen to be working closely with OpenAI to improve them at a suitable cost and scale,” shared Singh.

Separately, boAt last week partnered with Amplify.ai and Meta to enhance the personalised conversational experience for its users, with a chatbot ideated by Meta’s Creative Shop and built in partnership with Amplify.ai.

Sailing against the wind

boAt has had its ups and downs, and it has been deliberate about scaling its technology stack. For instance, boAt ran on Tally in its early days but migrated to SAP only a year ago. Why?

Singh said that SAP offers better feature functionality compared to its competitors, meeting all their requirements. Secondly, SAP had a strong partner ecosystem, enabling them to find more adaptable and flexible Tier 2 or Tier 3 partners. Lastly, the SAP RISE program offered them the flexibility to focus on their core business processes while outsourcing the management of infrastructure and uptime.

Further, he said that Tally is primarily focused on accounting and lacks stock and inventory management, leaving inventory and financial systems unintegrated and making it challenging to assess inventory position.

Singh said there were fewer challenges from a product perspective, but the implementation had its difficulties, particularly in digitising the right business processes on SAP and integrating legacy warehousing tools. The team resolved these quickly, and Singh credited SAP’s project management team for helping with the implementation and tracking the project’s progress.

Why SAP, and not others?

Singh told AIM that when it comes to picking the right ERP solution, SAP’s competitors like Microsoft Dynamics and Oracle Fusion fall behind. In boAt’s experience, partner ecosystem and support, as well as feature-functionality coverage, are major shortcomings of Microsoft Dynamics and Oracle Fusion.

Besides that, SAP’s tech footprint has grown with solutions like SAP EWM and SAP TM, while Microsoft Dynamics lacks a strong ecosystem and supply-chain features. And while it may be cheaper in the short term to develop an in-house ERP, it becomes more expensive to manage and maintain in the long run; eventually, off-the-shelf ERP products offer the better cost-benefit.

Read more: Enterprises Die for Domain Expertise Over New Technologies

Navigating data management challenges

boAt has already set up a fully functional data lake with SAP data and other sources, including Google Analytics, Shopify, and unit data. The company intends to traverse the entire information continuum, from operational and descriptive reporting to predictive, prescriptive, and insight-driven analytics. It already has all operational reports live on the data lake and is now exploring analytics use cases that can directly impact the top line.

Despite all of these efforts, the company is constrained by limited data: there is a dearth of historical data for predictive analysis to work with.

“We have limited historical data, which means we are not yet able to utilise predictive or prescriptive insights. Our primary use case would have been demand forecasting, but given that 80-85% of our sales come from marketplaces like Amazon and Flipkart, which fluctuate heavily, demand sensing is not effective in that context,” explained Singh.

Instead, the team is focusing on identifying product gaps by analysing the voice of the consumer to inform new product introductions. This doesn’t require much historical data; since boAt doesn’t get first-party data from marketplaces, it leans on big data and the voice of the consumer instead. Singh aims to transform boAt into an intelligent enterprise that relies on insights and data rather than gut decisions.

In a world where e-commerce giants are gobbling up vast amounts of data, many enterprises like boAt are struggling to keep up. Yet, despite this challenge, boAt has managed to carve out a place for itself as the fifth-largest wearable brand in the world. How did they do it? By taking a unique approach that emphasises outsourcing their tech tasks and focusing on marketing and branding. So if you’re struggling to keep up with the data-driven landscape of modern business, take a page from boAt’s playbook. Emphasise your strengths, outsource your weaknesses, and focus on what you do best.


The post How boAt is Rocking the Technology Landscape appeared first on Analytics India Magazine.

‘AI Transformation Will be Bigger Than Cloud Transformation for Enterprises’: Manuvir Das, NVIDIA

It’s a busy time for NVIDIA’s VP of Enterprise Computing, Manuvir Das. After all, it isn’t just AI that’s at an inflection point; enterprise AI, too, is at a crossroads. Das, who reports directly to company chief Jensen Huang, is convinced that the AI revolution in the enterprise will be as big as the cloud transformation, if not bigger.

Bigger than the cloud transformation

“Let me explain why the AI transformation will be bigger. The cloud transformation changed how IT teams worked, but AI is expected to impact all the employees of a company. It will make every employee more productive, not just the IT specialists.

“If you consider a bank, customer service agents will become more productive because they will have an AI assistant to help them work faster and more accurately. Finance people will become more productive because they will get better reports generated automatically instead of spending hours and hours. The HR employees will become more productive because they’ll get an automatic analysis of which team is performing well and which one is performing badly. So, the impact will be quite widespread,” Das said.

He explains what the next era of enterprise computing will look like. “Until now, any enterprise company that has benefited from AI has had to learn a lot about AI themselves, so they can be the practitioner. But now with generative AI, you have all these other companies like Slack, Microsoft, ServiceNow and SAP, whose products are used by thousands of companies. And if they incorporate generative AI into their products, then enterprises can benefit from it. This will be the big shift in the industry,” he stated.

Shift in cloud computing segment

But Das admits that integrating generative AI into enterprises seems deceptively simple. “Everybody looks at ChatGPT and thinks, ‘Hey, cool! I’m gonna use ChatGPT too,’ but the reality isn’t so simple,” he said.

Das noted three things that need to be considered. “The first is that every company has its own data that’s highly proprietary and confidential. The second is that, depending on what your company is doing, the skills needed from the model are going to be different. For example, you can teach the model how to write code or you can teach it how to generate an expense report.

“And then the third thing is that in a business setting, it’s equally important what the model isn’t allowed to say or do. Say, if I’m a bank, and I’m using one of these models for customer service then you obviously don’t want your chatbot to be answering such sensitive questions. So, we introduced the concept of guardrails,” he stated.

NVIDIA recently launched NeMo Guardrails, an open-source toolkit for adding such programmable guardrails to LLM-powered applications.
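
The guardrail idea can be sketched in a few lines. To be clear, this is a toy illustration only, not NVIDIA’s actual NeMo Guardrails API (which uses declarative flow definitions); the blocked topics and function names here are invented for the example:

```python
# Toy sketch of the guardrail concept: filter a chatbot exchange so the
# bot never discusses a blocked (sensitive) topic. Illustrative only.

BLOCKED_TOPICS = {"account balance", "social security number", "password"}

def guarded_reply(user_message: str, model_reply: str) -> str:
    """Return the model's reply only if no blocked topic appears."""
    text = (user_message + " " + model_reply).lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return "I'm sorry, I can't help with that topic."
    return model_reply

print(guarded_reply("What's the weather?", "Sunny and 25C."))
print(guarded_reply("Tell me my password", "Your password is hunter2"))
```

A real guardrail layer would sit between the application and the model and check both directions of the conversation, as Das describes for the banking case.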

According to Das, giant foundational models like GPT-4 are only the starting point for these enterprises because they have a lot of general knowledge from the internet and some general text prediction. “What an enterprise company actually needs is their own models that are highly specialised for a good performance,” he continued.

Increased competition in chipmaking

The wheels of change will also overhaul other major landscapes. Das predicts that the cloud segment, too, will shift because of AI. “The way the cloud has been built out in data centres, it’s been these traditional servers with CPUs, not GPUs, in them. This traditional model of computing does not really work well for AI, especially generative AI, because it lacks the accelerated computing that GPUs offer.

“If you look ten years from now, most servers in the cloud are probably going to have GPUs because they really are the only efficient and cost-effective way to do AI,” Das stated.

But even as the technology evolves at a fast pace, the industry changes with it. Big Tech players like Google and Microsoft are working on their own AI chips, increasing competition in an expensive industry. Das responded that NVIDIA doesn’t necessarily fear this. “The work that these companies are doing is a validation of the accelerated computing approach that NVIDIA brought to the world. The way we designed the GPU, and the reason it has stood the test of time, is because we built an interface like CUDA and we’ve developed this whole ecosystem. So, our approach to making chips is fairly general-purpose and they can be used for many different use cases. Every time we come up with a new generation of chips, it’s on the same software interface so developers don’t have to learn anything new,” he said.

Instead, Das said that NVIDIA encourages innovation. “We want to constantly improve our own GPUs as well at the same time. We’re very comfortable with the curve of innovation and our GPUs because of our years of knowledge. This isn’t the first version that we’ve built,” he said.

NVIDIA’s Enterprise partners

And there’s good reason for Das’ confidence. NVIDIA has a slew of partners to spread its enterprise technology.

“If we take the analogy of cars, NVIDIA builds the engine. Our role is to build a really good platform and keep innovating on it. For this strategy to actually work well, we need lots of partnerships with these other people.

We focus on cloud providers like Azure, Microsoft, Google, Amazon and Oracle, who are all working very hard to make their platform the better choice for enterprise customers, in terms of cost and capabilities. We’ve been working with them for several years now to integrate all our hardware and software products onto the public cloud. So, we don’t compete with them, we help them.

Our second type of natural partnerships are the ones we have with customers who have their own datasets and put servers into their data centres. They buy data centres from Dell, HP or Lenovo and we work with them to design servers with our GPUs.

The third type of partnerships which are newly developing are the next wave of companies like, say, CoreWeave which aren’t as big but want to build a specialised cloud just for accelerated computing. As accelerated computing gains importance, they are becoming providers for this and want to use our GPUs in their data centres. Equinix is another good example of this.

Our fourth kind of partner is a typical enterprise company like a bank which decides to go through cloud transformation, and that becomes a five-year long process. They don’t do it on their own and instead sign up with a Global Service Integrator (GSI) like Accenture or Infosys who we partner with. Like now, we have a big partnership with Deloitte and even work with Accenture, Infosys.

And finally, the reason that NVIDIA has been successful is because there’s a crucial partner that people don’t stop and think about, which is the software developer.

So, when we built our GPU, we built an SDK called CUDA that developers use to access our GPU and write the application. We’ve put in a lot of effort into helping developers use our platform which is why we have around 3.5 million developers in our ecosystem. These are ultimately the people who create the software for AI and they are very precious to us,” Das explained.

Scope in India

For Das, India is a crucial piece of the puzzle considering some of the biggest GSIs in the world are based in India. “NVIDIA has big partnerships with companies like Wipro, TCS and so on. We are also working closely with government institutions in India because, as you know, the Indian government has started a lot of initiatives to be technologically independent. There have been efforts to build supercomputers in India and these have been built using NVIDIA’s tech.

Then, there are other massive companies in the telecom sector like Reliance, or in retail companies which really benefit from AI and accelerated computing and we have partnered with these companies too,” he said.

The post ‘AI Transformation Will be Bigger Than Cloud Transformation for Enterprises’: Manuvir Das, NVIDIA appeared first on Analytics India Magazine.

Quantum Computing Propels Data Science into the Future

The world of technology is constantly evolving and professionals need to stay up-to-date with the latest tools, technologies, and trends in their field to meet the needs of their organisations. One such technology that is gaining widespread attention for its potential to shape the future of data processing and handling is quantum computing.

Computing plays a critical role in the entire data science pipeline, from capturing and maintaining data, to processing and analysing it, and ultimately communicating or taking action based on the insights. And computational challenges are often associated with statistical analysis.

Yazhen Wang, Chair and Professor of the Department of Statistics at the University of Wisconsin, explains in an article that statistical approaches that are mathematically optimal may not always be computationally possible, while data analysis methods that are computationally efficient may not be statistically optimal.

But, data continues to increase in both scale and complexity, and models used in fields such as deep learning are becoming more complex. With this, Wang says, the development of computational techniques involved in data science, from chips to software to systems, is becoming increasingly challenging.

“As the amount of data available to generate an effective analysis and recommendation is increasing, new models will be required to enable integrating data from an increasing variety of sources,” said Sanjay Pandit, Senior Engineering Director in Unisys India. “And new analytics and mathematical computations are necessary to improve the output speed and quality of recommendations.”

But to make it a business reality, Pandit mentions that there must be a paradigm shift in mindset to integrate compute-intensive algorithms for the future. This can be seen from two perspectives. First, from the creators of data models, where the expansion in capacity and timely, optimised output provided by quantum computing allows them to utilise more complex algorithms and expect quicker results than in the past. Having a grounding in quantum-relevant mathematical concepts may be helpful to align data models with compute processing.

Second, from the consumers of data models, who can consider leveraging data from multiple sources, challenging current algorithms, and striving for faster business outcomes.

Role reversal

But it is not just quantum computing advancing data science; researchers have also explored the potential of data science to advance quantum computing.

Quantum certification is a process that ensures quantum devices perform correctly, using protocols that test and assess their properties. It requires data science to help calibrate and validate the properties of quantum devices.

One class of quantum algorithms, quantum walk-based algorithms such as Grover’s algorithm, has larger variance than its classical counterparts. These algorithms are inherently random and can be framed as statistical problems, which means there is a tradeoff between statistical and computational efficiency: quantum algorithms gain computational efficiency (faster computation) at the expense of statistical efficiency (larger variance) compared to classical algorithms.

Data science approaches can help to understand and optimise this tradeoff, and to identify the general resources (physical materials, digital content, or mathematical elements) that achieve computational speedup in quantum algorithms, as well as to study how efficiently they do so.

Quantum is not absolute

According to Barry Sanders, a professor at the University of Calgary, the quantum computers we have now are small and noisy, and whether or not quantum computers are advantageous for data science might only be known through an empirical approach. This approach involves building quantum computers, testing algorithmic performance on such computers, and seeing whether an advantage has been found or not.

But the open question of whether there is an advantage doesn’t seem to deter organisations from betting on quantum as the future of computing. As per a Gartner report, 90% of organisations will partner with consulting companies or full-stack providers to accelerate quantum computing innovation through 2023.

Some early advantages have, however, been found in quantum-adjacent computing, which applies knowledge gained from quantum algorithm development to create better algorithms for today’s non-quantum computers. Specifically, the early-stage quantum-adjacent advantages are in discrete combinatorial optimisation.

Sanders cites an analysis published earlier this year which identifies quantum computing as commercially viable for combinatorial optimisation problems, which deal with finding optimal solutions among a finite set of objects. The analysis names three business verticals leveraging quantum: financial portfolio management, computing lowest-energy configurations with applications to material design and pharmaceuticals, and predicting rare failures in advanced manufacturing.

The post Quantum Computing Propels Data Science into the Future appeared first on Analytics India Magazine.

10 Must-Know Math Concepts For Programmers

Programming is often thought to demand deep mathematical knowledge. In truth, you don’t need to be a math expert to become a programmer, but some mathematical concepts can greatly enhance your programming and problem-solving skills.

So here are 10 mathematical concepts every programmer should know:

Numeral Systems

Numeral systems in programming are ways of representing numbers using different symbols and bases. The most common systems are decimal (base-10), binary (base-2), hexadecimal (base-16), and octal (base-8). Each system has its own set of symbols and rules for representing numbers. They are used for different purposes in programming, such as representing data, memory addresses, and byte values.

As a programmer, you can choose whichever system suits the needs of your project, whether you stick with the familiar decimal system or reach for binary or hexadecimal when they map more naturally to the problem.
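
As a quick sketch, here is the same number written in all four common bases using Python’s built-in literals and conversion functions:

```python
n = 202

# The same value in Python's four numeral-system literals
assert n == 0b11001010  # binary (base-2)
assert n == 0o312       # octal (base-8)
assert n == 0xCA        # hexadecimal (base-16)

# Converting back and forth between representations
print(bin(n))              # 0b11001010
print(oct(n))              # 0o312
print(hex(n))              # 0xca
print(int("11001010", 2))  # 202
```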

Linear Algebra

Linear algebra is a powerful mathematical tool used in programming to manipulate large sets of data efficiently. It helps programmers build complex algorithms for machine learning, computer graphics, and cryptography by using techniques like matrix operations, vector addition, and finding eigenvalues and eigenvectors. Linear algebra is like a set of building blocks that programmers can use to create advanced systems that can process and analyze data at scale.
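
As a minimal illustration of one such building block, here is a matrix-vector product in plain Python (real systems would use a library like NumPy), applying a 90-degree rotation matrix to a 2D point:

```python
# Matrix-vector product: the core operation behind graphics transforms
# and neural-network layers.

def mat_vec(A, x):
    """Multiply matrix A (given as a list of rows) by vector x."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# Rotate the point (1, 0) by 90 degrees counterclockwise.
R = [[0, -1],
     [1,  0]]
print(mat_vec(R, [1, 0]))  # [0, 1]
```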

Statistics

In programming, statistics is used in a variety of applications, from fraud detection to medical research. By using statistics to analyze and interpret data, programmers can make more informed decisions and create better systems. It’s like having a detective on your team who can help you solve complex problems and uncover hidden insights.
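
As a small illustrative sketch of the fraud-detection idea (the transaction amounts and the 2-standard-deviation threshold are made up for the example), a z-score can flag an anomalous value:

```python
import statistics

# Flag transactions more than 2 standard deviations from the mean --
# a simple statistical building block behind anomaly detection.

amounts = [20, 25, 22, 19, 24, 21, 500]  # one suspicious transaction

mean = statistics.mean(amounts)
stdev = statistics.stdev(amounts)

outliers = [a for a in amounts if abs(a - mean) / stdev > 2]
print(outliers)  # [500]
```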

Boolean Algebra

Boolean algebra is a branch of mathematics that deals with logical operations on binary variables. In simpler terms, it’s a system of mathematics that helps us work with true and false values, represented as 1 and 0, respectively.

In Boolean algebra, there are three key operations: AND, OR, and NOT.

  • The AND operation is represented by a dot (.) and it takes two inputs. It outputs 1 only if both inputs are 1, otherwise, it outputs 0.
  • The OR operation is represented by a plus sign (+) and it also takes two inputs. It outputs 1 if either one or both inputs are 1, otherwise, it outputs 0.
  • The NOT operation is represented by a bar over a variable (¬ or ~) and it takes only one input. It outputs the opposite value of the input, i.e. if the input is 1, it outputs 0, and if the input is 0, it outputs 1.

Using these operations, we can create logical expressions that represent complex conditions. For example, the expression (A AND B) OR (NOT A AND C) means that we want to output 1 if both A and B are 1, or if A is 0 and C is 1.
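
The expression above can be evaluated directly in Python, whose `and`, `or`, and `not` operators implement exactly these Boolean operations; printing every input combination produces its truth table:

```python
# Truth table for (A AND B) OR (NOT A AND C), the expression from the text.

def f(a, b, c):
    return (a and b) or ((not a) and c)

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", int(f(a, b, c)))
```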

Read: How is Boolean Algebra used in ML?

Floating Points

Floating points in programming are like scientific notation for computers. They allow a wide range of real numbers to be represented using a significand and an exponent: the significand is a binary number that holds the significant digits, and the exponent is an integer power of 2 by which the significand is scaled. Together, they form a floating-point representation of the number.

Because precision is limited, the representation is not always exact. Floating-point numbers are commonly used for calculations in science, engineering, and graphics, but they require careful consideration of potential inaccuracies in code.
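
That inexactness is easy to demonstrate: 0.1 has no exact binary representation, so even a trivial sum carries a tiny error.

```python
import math

print(0.1 + 0.2)          # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)   # False

# The idiomatic fix: compare with a tolerance instead of exact equality.
print(math.isclose(0.1 + 0.2, 0.3))  # True
```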

Logarithms

Logarithms are like special tools for solving problems involving exponential growth or decay. They help to transform large numbers into smaller, more manageable ones, making calculations more efficient.

For example, a computer program may need to calculate the result of a complex mathematical equation that involves very large numbers. By taking the logarithm of those numbers, the program can transform them into smaller values that are easier to work with. This can significantly reduce the processing time and memory requirements needed to complete the calculation.
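
A concrete case of this: multiplying many small probabilities underflows to zero, while summing their logarithms stays perfectly manageable (the probability values here are illustrative):

```python
import math

probs = [1e-5] * 100  # 100 small probabilities

# Naive product underflows: 1e-500 is smaller than any float can hold.
naive = 1.0
for p in probs:
    naive *= p
print(naive)  # 0.0

# Summing logarithms keeps the computation in a small, exact range.
log_total = sum(math.log(p) for p in probs)
print(log_total)  # roughly -1151.3, easily representable
```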

Set Theory

Set Theory deals with sets, which are collections of distinct objects. In programming, set theory is used to solve problems that involve grouping or organizing data. A set can be defined as a collection of unique elements. These elements can be anything, such as numbers, strings, or even other sets.

In programming, set theory is used to solve problems such as searching for elements in a collection, comparing sets, and merging or splitting sets. It is often used in database management, data analysis, and machine learning.
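
Python’s built-in `set` type implements these operations directly; the user names below are invented for illustration:

```python
# Comparing, merging, and splitting collections of unique elements.

signups = {"ana", "ben", "cara", "dev"}
purchasers = {"ben", "dev", "eli"}

print(signups | purchasers)  # union: everyone seen at all
print(signups & purchasers)  # intersection: signed up AND purchased
print(signups - purchasers)  # difference: signed up but never purchased
```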

Combinatorics

Combinatorics is a magic wand for counting and arranging objects. By using combinatorial techniques, programmers can solve problems related to probability, statistics, and optimization in a wide range of applications.

For example, combinatorics can be used to generate random numbers or to analyze patterns in large datasets.
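
For a quick sketch, Python’s standard library can both count and enumerate selections and arrangements:

```python
from itertools import combinations, permutations
from math import comb, factorial

team = ["A", "B", "C", "D"]

# How many ways to pick 2 members out of 4? C(4, 2) = 6
print(comb(4, 2))                   # 6
print(list(combinations(team, 2)))  # the 6 pairs themselves

# How many orderings of all 4 members? 4! = 24
print(factorial(4))                   # 24
print(len(list(permutations(team))))  # 24
```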

Graph Theory

In programming, graph theory is used to solve problems such as finding the shortest path between two nodes in a network, detecting cycles or loops in a graph, and clustering nodes into communities. Graph theory is also used in artificial intelligence and machine learning, where it can be used to model decision trees and neural networks.

One of the key benefits of graph theory in programming is its ability to represent complex systems and relationships in a simple and intuitive way. By using graphs to model problems, programmers can analyze and optimise complex systems more efficiently, making graph theory an essential tool for several programming applications.
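
As a sketch of the shortest-path problem mentioned above, here is breadth-first search on a small made-up graph, which finds a shortest path by hop count:

```python
from collections import deque

# A directed graph as an adjacency list (nodes and edges are illustrative).
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
    "E": [],
}

def shortest_path(start, goal):
    """Breadth-first search: explores paths in order of length."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path("A", "E"))  # ['A', 'B', 'D', 'E']
```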

Read: Top resources to learn graph neural networks

Complexity Theory

Complexity theory is like having a GPS for programming. It helps you navigate the vast landscape of problems and algorithms and find the most efficient path to your destination. One of the key benefits of complexity theory in programming is its ability to identify the most efficient algorithm for solving a problem.

The most famous problem in complexity theory is the “P vs NP” problem, which asks whether every problem whose solution can be verified quickly can also be solved quickly. Problems solvable in polynomial time form the class “P”; problems whose proposed solutions can be verified in polynomial time form the class “NP”. P is contained in NP, but whether the two classes are equal remains an open question.
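
Subset sum makes the verify-vs-solve asymmetry concrete: checking a candidate answer is a single pass, while the brute-force solver below tries up to 2^n subsets (a toy example with made-up numbers):

```python
from itertools import combinations

nums = [3, 34, 4, 12, 5, 2]
target = 9

def verify(subset, target):
    """Fast: checking a proposed answer is O(len(subset))."""
    return sum(subset) == target

def solve(nums, target):
    """Slow: brute force tries every one of the 2^n subsets."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if verify(subset, target):
                return subset
    return None

print(solve(nums, target))  # (4, 5)
```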

The post 10 Must-Know Math Concepts For Programmers appeared first on Analytics India Magazine.