Tech Mahindra Partners with Microsoft to Enable Gen AI Powered Enterprise Search

Tech Mahindra has announced an integration with Microsoft to enable generative AI-powered enterprise search. The new offering, Generative AI Powered Enterprise Knowledge Search, part of Tech Mahindra’s TechM amplifAI0->∞ suite of AI offerings and solutions, will help enterprises increase effectiveness and personalisation by using generative AI to unlock the full potential of enterprise data and present a multi-modal, multi-channel search experience.

Tech Mahindra’s Generative AI-powered Enterprise Knowledge Search integrates Microsoft Azure OpenAI Service, Azure Cognitive Search, and Azure Language Understanding to help enterprises make knowledge more accessible, which should eventually improve the knowledge quotient within organizations.

This will bring multiple AI-led capabilities like content summarization, knowledge graph-led knowledge structuring, and a new kind of query interface.

“We are delivering an advanced enterprise search offering, which will unlock the full potential of data in enterprise environments with generative AI and offer a user-centric and efficient search, ensure tagging and indexing are correct, de-duplicate content, remove irrelevant content & maintain the repository, leverage advanced statistical analysis, Natural Language Processing (NLP), Large Language Model (LLM), etc.

“The offering provides for the manifestation of content transcending across different content types and multiple content sources,” Hasit Trivedi, CTO – Digital Technologies and Global Head – AI, Tech Mahindra, said.

The post Tech Mahindra Partners with Microsoft to Enable Gen AI Powered Enterprise Search appeared first on Analytics India Magazine.

Runway Bags $141M Series C Extension from Google, Nvidia, Salesforce

June 30, 2023, by Jaime Hampton

(cybermagician/Shutterstock)

Generative AI investment capital continues to flow this week. Runway, a company specializing in AI-powered content creation tools, has announced a $141 million Series C extension from Google, Nvidia, Salesforce Ventures, and others.

This extended round builds on a Series C announced in December. Reuters reported the company is now valued at $1.5 billion, according to a source familiar with the matter. The company says it will leverage the fresh funds to further scale its in-house R&D, expand its team, and continue to bring its multi-modal AI systems and products to market.

Runway offers a suite of AI-powered tools that it says are used by TV shows, media companies, and creatives across industries, including an Academy Award-nominated movie. The company says its research team is involved in new developments in deep learning to ensure the future of content creation is accessible, controllable, and empowering for artists and creatives.

“We’re rebuilding the creative stack from the ground up and developing impactful research that will be a major force of change across industries,” said Runway CEO and co-founder, Cristóbal Valenzuela. “The next phase of storytelling will be highly democratized, and our ultimate goal is to create a more equitable, diverse, and creative world through our products and research outputs. We are thrilled to have continued support from investors, partners, and customers who believe in this vision.”

Runway co-founders Anastasis Germanidis (top), Alejandro Matamala-Ortiz, and Cristóbal Valenzuela (L to R). (Source: Runway)

Runway’s flagship product is Gen-2, the second iteration of its text-to-video model that allows users to generate video content from an input video and either images or text prompts. Earlier this year, the company launched a mobile app and hosted a series of AI Film Festivals. It also created Runway Studios, an entertainment and production division of the company that serves as a production partner and platform for creatives.

Nvidia is an investor and cloud computing provider for the company. “Generative AI is transforming the content creation industry, breathing new life into stories and ideas that were not imaginable,” said Jensen Huang, founder and CEO of Nvidia. “The Runway team is doing amazing work with Nvidia accelerated computing in the cloud to push the boundaries of creativity and storytelling for millions of artists globally.”

“Runway has been pushing the boundaries of creativity with artificial intelligence for the last five years, and we are excited to continue innovating on a new era of creative tools for artists and creators everywhere,” Valenzuela wrote in a blog post. “In 2023, we brought to market Gen-1 and Gen-2, the first of their kind video generation models. Today, our entire suite of AI Magic Tools is being used by Fortune 500 and Global 2000 companies, helping tell new types of stories, and streamline workflows.”

Related

Generative AI may help make ‘low-code’ more ‘no-code’ — but with important caveats


Are generative AI and no-code development becoming synonymous?

It looks that way. Both provide ways to quickly generate code specifying certain routines. But there are distinct differences as well — namely, generative AI assists professional developers, while no- and low-code tools are more targeted at non-developers. Non-developers likely won't be ready to fuss with AI-generated code any time soon.

Also: Who owns the code? If ChatGPT's AI helps write your app, does it still belong to you?

A recent survey of 2,000 IT executives released by Microsoft finds 87% of CIOs and IT pros say increased AI and automation embedded into low-code platforms would help them better use the full set of capabilities. This is "a trend we are seeing across low-code tools," remarks Richard Riley, general manager for Microsoft's Power Platform.

"Generative AI certainly appears to be another way for code to be automatically generated," says Dr. James Fairweather, chief innovation officer of Pitney Bowes. "It's showing the potential to be a great aid in bridging the gap between the intent of a person and the computer programming required to solve a task."

However, software development is a much more complex experience than simply pumping out code, Fairweather adds. "The generative capabilities we are seeing in language and image models are a small subset of the topics that will need to be modeled for generative AI to take a larger role in automated software development," he points out. "Every software system has additional considerations — like logical and physical system architecture, data modeling, build and deployment engineering, and maintenance and management activity — that still appear to be well beyond current generative AI capabilities."

Also: 92% of programmers are using AI tools, says GitHub developer survey

AI will ultimately serve "as a way to enable low-code and no-code environments," says Leon Kallikkadan, vice president of technology at Atrium. "I also think that as other partnerships can come onboard it will make low-code and no-code more of a possibility. I believe it will be a phased approach whereby as you, the human developer builds it, an AI component will start creating a vision or future step. The long-term possibilities depend on how deep the integration is, but yes, it can go that far to become a low-code, no-code environment."

No and low-code solutions may be a good fit for non-technical users. "Low code is more geared towards non-coders," says Jesse Reiss, CTO of Hummingbird. "It provides organizations with the ability to reimagine business processes without obtaining steep IT expertise. This is crucial for small- to medium-sized businesses, especially during the ongoing labor challenge where they can be short-staffed or do not have the resources to support business operations."

Also: How to use ChatGPT to write code

Generative AI is more suitable for development work requiring high-level expertise, experts state. "For building apps, I don't think it is as much about low- or no-code environments as we currently imagine them," says Louis Landry, engineering fellow with Teradata. "Building things always requires code. Rather, it's about simplifying and speeding up the coding process for the programmer."

Generative AI serves to "rapidly provide code that supports existing systems or infrastructure," says Reiss. "What I'm seeing now is that the businesses that are able to leverage generative AI most effectively are businesses that have the underlying framework or infrastructure to support the use case. They are able to make their operations faster, easier, and simpler or are able to incorporate AI into existing product lines."

Also: I'm using ChatGPT to help me fix code faster, but at what cost?

Still, generative AI may help make low-code more no-code. "One of the most significant benefits of generative AI is its ability to bridge the gap between low-code and no-code environments," says Oshri Moyal, cofounder and CTO at Atera. "By providing pre-built models and code templates, generative AI allows developers to create sophisticated applications without requiring extensive coding skills. This democratizes the development process and opens up opportunities for a broader range of individuals to participate in building technology solutions."

More on AI tools

Microsoft Announces New AI Skills Initiative and Grant

Microsoft has announced a new AI Skills Initiative to help people and communities around the world learn to harness the power of AI.

The new initiative, part of Microsoft’s Skills for Jobs programme, includes new, free coursework developed with LinkedIn, as well as a new grant challenge with data.org for organisations to create new ways of training, upskilling, and reskilling workers in generative AI.

“AI skills represent the third-highest priority for companies’ training strategies, alongside analytical and creative thinking. AI has tremendous potential to empower workers, however, we need to ensure that everyone has the skills to use it. The new AI Skills Initiative marks a new beginning that will build on a new wave of technology innovation to come,” Gunjan Patel, director and head – Philanthropies, Microsoft India, said.

Microsoft, data.org, Microsoft’s AI for Good Lab, and GitHub are launching an open grant program to explore, develop, and implement how nonprofit, social enterprise, and research or academic institutions can train and empower the workforce to use generative AI.

This global grant will support organisations driving skilling and economic growth, especially those focusing on fair and community-led implementations of generative AI with historically marginalised populations around the world.

In addition to financial support, the awardees will receive access to a cohort experience, Microsoft events, Azure-based cloud computing resources as well as data training and technical guidance from Microsoft and GitHub experts.

The post Microsoft Announces New AI Skills Initiative and Grant appeared first on Analytics India Magazine.

Data assets in the AI era

Skyscrapers Horizon Urban Skyline by Camera-man on Pixabay

Here’s a hypothesis: Smart data (enriched with metadata that makes it connectable with other data, Tinker Toy style) might be fungible in ways that dumb data isn’t.
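A toy sketch may make the hypothesis concrete. Here, a “smart” record carries metadata (a shared entity identifier, units, a vocabulary link) that lets it connect, Tinker Toy style, with other data; all of the field names and the identifier are invented for illustration:

```python
# A "dumb" record: values only; nothing says how to connect it elsewhere.
dumb_record = {"name": "Acme", "revenue": 120.5}

# A "smart" record: metadata makes it connectable with other data.
# All identifiers and field names below are hypothetical.
smart_record = {
    "value": {"name": "Acme", "revenue": 120.5},
    "meta": {
        "entity_id": "lei:5493001KJTIIGC8Y1R12",      # shared identifier for joins
        "revenue_unit": "USD_millions",
        "schema": "https://schema.org/Organization",  # shared vocabulary
        "as_of": "2023-06-30",
    },
}

def connectable(record, other_meta):
    """Two smart records can be linked when they share an entity identifier."""
    eid = record.get("meta", {}).get("entity_id")
    return eid is not None and eid == other_meta.get("entity_id")

print(connectable(smart_record, {"entity_id": "lei:5493001KJTIIGC8Y1R12"}))  # True
print(connectable(dumb_record, {"entity_id": "lei:5493001KJTIIGC8Y1R12"}))   # False
```

The dumb record isn’t wrong, it just can’t participate in exchange and reuse without a human re-deriving its context each time.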

For exchange and reuse purposes, can data be “fungible” at all? Consider this excerpt from an explainer on the Cointelegraph.com site:

“Fungible tokens or assets are divisible and non-unique. For instance, fiat currencies like the dollar are fungible: A $1 bill in New York City has the same value as a $1 bill in Miami. A fungible token can also be a cryptocurrency like Bitcoin: 1 BTC is worth 1 BTC, no matter where it is issued.

“Non-fungible assets, on the other hand, are unique and non-divisible. They should be considered as a type of deed or title of ownership of a unique, non-replicable item. For example, a flight ticket is non fungible because there cannot be another of the same kind due to its specific data. A house, a boat or a car are non-fungible physical assets because they are one-of-a-kind.”

Part of the problem with this definition of fungibility is its tight association with currency. Imagine you have a flow of data. Unique, fungible assets could conceivably be part of that data stream.

In other words, exchangeable, unique assets could be blended in with other non-unique data, in the same data stream. Why would you even want to do this kind of blending? Maybe you want to encourage consumption of that data stream. Say an advertiser wants to display an ad, or a pitchman deliver a pitch. Maybe the consumer would be promised an NFT in exchange for listening to the pitch and providing some feedback.

Data assets and behavior

At a minimum, data, unless it’s just exhaust that’s not reused, is clearly an asset.

Art Kleiner and Juliette Powell have just finished a book called The AI Dilemma, slated for release in August 2023. It’s a book written for business executives.

I gave an interview for the book, and we’ve had some email exchanges during the course of the book writing and production process on AI and data trends. Most recently, Art, whom I know from his days as the Editor-in-Chief at Strategy & Business, made these observations:

I keep thinking of Juliette’s line “you are what you reveal.” That increasingly translates to: “anything you do is data about you.”

If I’m correct in my interpretation (which I may not be), you’re saying that a data-centric architecture would imply, “all data are assets, and need to be tagged as assets. You automatically own what you reveal.” Whether you monetize it or not, in a world that recognizes data as assets, the data is linked back to you.

This would start companies managing their data as the decision point (the new unified data-first IT replacing the old fragmented algorithm-first IT), but it wouldn’t stop at corporate boundaries. As data moved along supply chains and through transactions, data holders would feel compelled to adopt the standard, to become interoperable.

Art and Juliette’s observations made me realize that businesspeople can guide technologists in more ways than they know, particularly when it comes to data.

Data as an organic, growing, and flowing resource

To use a materials metaphor, “data” is silica to many data scientists. It’s inert and inorganic. You have to add metal and heat to make electrically active silicon wafers out of it. Otherwise, it’s just sand, and not a renewable resource in any case.

What more intelligence in the data makes possible is the organic, dynamic, interactive representation of the living world.

This representation is what Wired Co-Founder Kevin Kelly calls the Mirrorworld. Twitter, LinkedIn and other social networks have been a starting point for that sort of interactive, growing, farming and harvesting environment.

The Mirrorworld is a misnomer in some ways. Data doesn’t just mirror what it represents. It serves as a lifeblood for the digital environment.

These folks are wrapping the world in an evolving, multi-use operating and resource sharing environment using web principles. Data is the evolving resource that informs this operating and resource sharing environment.

Independent knowledge graph consultant Roy Roebuck describes the data evolution challenge that enterprises face as one with multiple layers. “Data with added context or meta provides information. Each layer of this consciousness progression is built by assigning context to the previous layer. Beyond the knowledge layer is where humans, and potentially A.I. constructs, would build their consciousness.”

Building blocks of recorded intelligence


Instead of thinking in terms of how to automate business functions by handing them off to machines and third parties, it’s best to think in terms of a pervasive, machine-assisted human-in-the-loop scenario.

The question becomes, how can we empower humans and take advantage of machine-based effectiveness and efficiency at the same time?

Fleshing out the data asset

Data can express its own utility, contextual relevance, and parameters for monetization and exchange, as well as other attributes. Agents and humans together can be the actors who nurture the data and put it to use.

The intelligence, relevance and interoperation details can reside in the data, with the functions enabled by agents brokering interactions between machine and human actors, after all. The ideal situation from an efficiency perspective is to create intelligent data executing agents designed to live and thrive in the web environment, harnessing the functionality of the larger environment.

The web in its first and third incarnations (a.k.a. the “webby way” David Weinberger et al. described in 2001’s The Cluetrain Manifesto, and the so-called “web3” or decentralized web of today; see One Big Graph and the Interorganization, https://www.datasciencecentral.com/one-big-graph-and-the-interorganization/) are examples of how collaboration can be less siloed.

Design and create once, use everywhere helps all participants in the ecosystem. So why not have one ecosystem, or one virtualized model of what exists so that we can quantify and manage each of its component parts for efficiency, effectiveness and net impact on the living world?

Build AI Chatbot in 5 Minutes with Hugging Face and Gradio

Image by Author

This short tutorial will build a simple chatbot using the Microsoft DialoGPT model, a Hugging Face Space, and a Gradio interface. You will be able to develop and customize your own app in 5 minutes using a similar technique.

1. Create a New Space

  1. Go to hf.co and create a free account. After that, click your profile image in the top right and select the “New Space” option.
  2. Fill out the form with the app name, license, Space hardware, and visibility.

Image from Space

  3. Press “Create Space” to initialize the application.
  4. You can clone the repository and push the files from your local system, or create and edit files on Hugging Face using the browser.

Image from AI ChatBot

2. Create ChatBot App File

We will click on the “Files” tab > + Add file > Create a new file.

Image from kingabzpro/AI-ChatBot

Create a Gradio interface. You can copy my code.

Image from app.py

I have loaded the "microsoft/DialoGPT-large" tokenizer and model and created a `predict` function for getting the response and creating the history.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import gradio as gr
import torch

title = "🤖AI ChatBot"
description = "A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)"
examples = [["How are you?"]]

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")


def predict(input, history=[]):
    # tokenize the new input sentence
    new_user_input_ids = tokenizer.encode(
        input + tokenizer.eos_token, return_tensors="pt"
    )

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1)

    # generate a response
    history = model.generate(
        bot_input_ids, max_length=4000, pad_token_id=tokenizer.eos_token_id
    ).tolist()

    # convert the tokens to text, then split the turns into (user, bot) tuples
    response = tokenizer.decode(history[0]).split("<|endoftext|>")
    response = [
        (response[i], response[i + 1]) for i in range(0, len(response) - 1, 2)
    ]
    return response, history


gr.Interface(
    fn=predict,
    title=title,
    description=description,
    examples=examples,
    inputs=["text", "state"],
    outputs=["chatbot", "state"],
    theme="finlaymacklon/boxy_violet",
).launch()
```
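The trickiest part of `predict` is the final decode-and-pair step. As a minimal sketch, the same splitting and pairing logic can be run on a hand-written transcript string (invented for the example), with no model download required:

```python
# What tokenizer.decode() might return after one exchange: turns separated
# by DialoGPT's end-of-text token.
decoded = "How are you?<|endoftext|>I'm doing great, thanks!<|endoftext|>"

# Same logic as in predict(): split on the separator, then pair up
# alternating turns as (user, bot) tuples for Gradio's chatbot output.
turns = decoded.split("<|endoftext|>")
pairs = [(turns[i], turns[i + 1]) for i in range(0, len(turns) - 1, 2)]

print(pairs)  # one (user, bot) tuple per exchange
```

Note that the split leaves a trailing empty string, which the `len(turns) - 1` bound skips.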

Moreover, I have provided my app with a customized theme: boxy_violet. You can browse Gradio Theme Gallery to select the theme according to your taste.

3. Create a Requirement File

Now, we need to create a `requirements.txt` file and add the required Python packages.

Image from requirements.txt

transformers
torch

After that, your app will start building, and within a few minutes it will download the model and load it for inference.

4. Gradio Demo

The Gradio app looks awesome. We just have to create a `predict` function for each different model architecture to get responses and maintain history.

You can now chat and interact with the app at kingabzpro/AI-ChatBot, or embed it on your website using https://kingabzpro-ai-chatbot.hf.space.

Image from kingabzpro/AI-ChatBot

Are you still confused? Look for hundreds of chatbot apps on Spaces to get inspiration and understand the model inference.

For example, if you have a model that is fine-tuned on LLaMA-7B, search for the model and scroll down to see various implementations of it.

Image from decapoda-research/llama-7b-hf

Conclusion

In conclusion, this blog provides a quick and easy tutorial on creating an AI chatbot using Hugging Face and Gradio in just 5 minutes. With step-by-step instructions and customizable options, anyone can easily create their chatbot.

It was fun, and I hope you have learned something. Please share your Gradio demo in the comment section. If you are looking for an even simpler solution, check out OpenChat: The Free & Simple Platform for Building Custom Chatbots in Minutes.
Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master's degree in Technology Management and a bachelor's degree in Telecommunication Engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.

More On This Topic

  • Hugging Face Transformers Package — What Is It and How To Use It
  • Understanding BERT with Hugging Face
  • Overview of AutoNLP from Hugging Face with Example Project
  • Training BPE, WordPiece, and Unigram Tokenizers from Scratch using Hugging…
  • Top 10 Machine Learning Demos: Hugging Face Spaces Edition
  • A community developing a Hugging Face for customer data modeling

Behind Indian IT’s Mixed Emotions for LGBTQ+

In 2018, India’s Supreme Court struck down Section 377 of the Indian Penal Code, decriminalising homosexuality — a major victory for LGBTQ+ rights in the country. Since then, the tech industry has made several attempts to make it more inclusive for them.

Indian IT Rewrites the Inclusivity Code

Interestingly, TCS was one of the first Indian IT companies to tweak its insurance policy to include same-sex partners and redefine the policy with the use of ‘partner’ instead of ‘husband/wife’, broadening the scope. Tech Mahindra recently organised a pride walk at its Bengaluru campus to celebrate Pride Month and show support and solidarity with the LGBTQIA+ community, with participation from its employees, allies, and community members. Tech Mahindra also supports the community through initiatives such as 12 weeks of paid same-sex adoption leave, bereavement leave for same-sex partners, support groups, diversity and inclusion training, and visible LGBTQ+ leadership.

Similarly, Infosys’ initiatives include an enhanced Health Insurance Plan covering partners and gender confirmation surgeries, a learning program called #AllyForChange, employee resource group called IPRIDE, focused on creating a safe and inclusive workplace and promoting education on sexual orientation and gender identity, and more.

Besides health insurance, one of Wipro’s key initiatives in this space is ‘Open Hearts Open Minds’, which uses real-life stories to raise awareness about appropriate language and behavior towards LGBTQ+ colleagues. The company has implemented measures such as a Gender Affirmation Policy, Gender Transition Guidelines, and a Gender Neutral Anti-Sexual Harassment policy to create a welcoming workplace for queer employees. HCLTech, another Indian IT giant, promotes LGBTQ+ inclusion through initiatives like the Pride@HCLTech ERG, support for the Out & Equal Workplace Summit, a non-discrimination policy, LGBTQ+ training for employees, and partnerships with organisations like the Human Rights Campaign, The Trevor Project, and GLAAD.

Unravelling the Ground Reality

Over the past two months, AIM reached out to numerous queer-identifying employees within renowned tech companies (not just IT) in India. The purpose was to gain firsthand insight into the ground reality and understand the experiences of LGBTQ+ members within the country’s tech giants.

The identities of both corporate entities and individuals have been anonymised, to protect privacy.

Initially, AIM faced a notable hurdle as individuals were reluctant to openly discuss the issue, citing company policies.

Despite these constraints, a common thread emerged from all the respondents: while management had implemented initiatives with good intentions, a pervasive problem of homophobic mindsets persisted among co-workers.

The crux of the issue appeared to stem from the constant prevalence of petty homophobic jokes. These derogatory comments, along with microaggressions, may not seem like a big deal at first, but their impact goes beyond the moment, contributing to a broader culture of discrimination and intolerance within these tech companies.

Never-ending Peril for Sexual Minorities

TW: Rape, Violence

Living a life marred by relentless bullying and harassment is far from easy. In spite of four years of decriminalisation of homosexuality, India is still not safe for sexual minorities.

Two youths gang-raped a 19-year-old transgender person in January in Bhiwandi. A woman who identified as lesbian was forced to undergo conversion therapy and marry a man until a court had to step in to provide protection. Last year, a transgender woman was forcefully taken from a government shelter in Noida and assaulted by the police. A transgender woman, Udhaya, was attacked in Tamil Nadu’s Tirunelveli district by her partner’s family. The list of violence against the LGBTQIA+ community is endless.

Today is the last day of June, the month of ‘Pride’, which brings with it vibrant expressions of support and inclusion. Instagram has rolled out new rainbow features. On Twitter, a simple ‘like’ transforms into a burst of rainbow colours when you engage with queer-related tweets. During this time, numerous companies alter their logos to stand in solidarity with the LGBTQ+ community. Although these efforts are made with a positive mindset, for most, it is reduced to a marketing gimmick.

It is crucial that we take a more comprehensive approach to address the underlying issues. Merely implementing policy changes and making token gestures is insufficient to combat the deep-rooted biases that still exist within our workforce.

The least we can do is be mindful and respectful of someone’s preferences. Your ‘harmless jokes’ on someone’s whole identity are actually not harmless.

Read more: Indian Tech is Still Oblivious to Gender Inclusivity

The post Behind Indian IT’s Mixed Emotions for LGBTQ+ appeared first on Analytics India Magazine.

Will ChatGPT Replace Data Scientists?

Image by Author

If you are working in the data industry or aspire to do so, you might be wondering if it’s time for a career change.

Will generative models like ChatGPT be the end of data scientists?

As someone who has worked in data science for three years, I’d like to provide my take on this.

In an article I wrote some time back, I strongly disagreed with the notion that automated AI software could ever replace data scientists. My argument was that these tools would improve organizational efficiency to some extent, but lacked customizability and required human involvement at every stage.

But that was back in February 2022, way before ChatGPT, OpenAI’s revolutionary language model, was released.

When ChatGPT was first made public, it was based on GPT-3.5, a model capable of understanding natural language and code.

Then, in March 2023, GPT-4 was released. This algorithm outperforms its predecessor in solving problems based on logic, creativity, and reasoning.

Here are some facts about GPT-4:

  • It can write code (like, really well)
  • It passed the bar exam
  • It outperformed most state-of-the-art models on machine learning benchmarks

This model can turn a sketch into a fully-fledged website and acts as a great assistant to programming and data science tasks.

And it is already being used by organizations to improve efficiency.

The CEO of Freshworks, Girish Mathrubootham, says that programming tasks that once took his employees 9 weeks to complete are now being done in a few days with ChatGPT.

With generative AI, coding workflows in this company are being completed approximately 20 times faster than usual. This will lead to a massive decrease in turnaround time, which means that companies can get more done faster.

The Bad — Why Your Job Is At Risk

Product Integrations

So far, we’ve just talked about programming.

There are other aspects to a data scientist’s job — such as data preparation, analysis, visualization, and model building.

In my experience, data scientists are currently highly in demand because of the diverse variety of skills they are expected to have.

Apart from building statistical models and learning to code, these professionals also need to use SQL for data extraction, work with software like Tableau and PowerBI for visualization, and effectively communicate insights to stakeholders.

With LLMs like ChatGPT, however, the barrier to getting into a field like data science or analytics will reduce tremendously. Candidates no longer need to possess expertise in various software, and can instead harness the power of LLMs to accomplish in minutes what would typically take hours.

For example, in a company I once worked with, I was asked to complete a timed Excel assessment since a majority of the organization’s database resided in spreadsheets. They wanted to hire someone who was able to quickly extract and analyze this data.

This requirement to hire candidates with expertise in using specific tools, however, will disappear as LLM adoption increases.

For instance, with a ChatGPT-Excel integration, you could simply highlight cells you want to analyze, and ask LLMs questions such as “What is the trend of these sales numbers over the last quarter,” or “Can you perform regression analysis?”
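Behind a question like that, the actual computation is small. As a hedged sketch (the sales figures below are made up for illustration), a “trend over the last quarter” question reduces to fitting a line to the numbers:

```python
import numpy as np

# Hypothetical weekly sales figures for the last quarter (13 weeks).
sales = np.array(
    [102, 108, 105, 112, 118, 115, 121, 127, 125, 131, 138, 135, 142],
    dtype=float,
)
weeks = np.arange(len(sales))

# Ordinary least-squares fit: the slope is the trend per week.
slope, intercept = np.polyfit(weeks, sales, deg=1)

print(f"Trend: {'up' if slope > 0 else 'down'}, about {slope:.1f} units/week")
```

An LLM integration would generate and run something like this behind the scenes, then phrase the slope as a plain-English answer.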

ChatGPT’s response to what an Excel integration would look like

Product integrations like this will make Excel and other similar software accessible to people who don’t typically use them, and the demand for experts in the tool will reduce.

Code Plugins

The ChatGPT code interpreter plugin is another example of how data science workflows are becoming democratized. It allows you to run Python code and analyze data in the chat.

Image by “The Latest Now” on Medium

You can upload CSV files and get ChatGPT to help you clean, analyze, and build statistical models on them.

Once you analyze the data and tell it what you want to do (for instance, forecast sales numbers for the next quarter), ChatGPT will tell you the steps you can take to achieve the final outcome.

It will then proceed to do the actual analysis and modeling for you, and explain the output at each stage of the process.

In one article, the author asks ChatGPT's code interpreter to predict future inflationary trends using Federal Reserve Economic Data (FRED). ChatGPT started by visualizing the current trend in the data.

It then checked the data for stationarity, transformed it, and decided to use ARIMA for the modeling. It was even able to find the optimal ARIMA parameters for generating forecasts:

Image by “The Latest Now” on Medium
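Those steps can be sketched in a heavily simplified form. The snippet below uses a short synthetic series in place of the FRED data, and a first difference plus an AR(1) fit as a stand-in for the full ARIMA parameter search:

```python
# Synthetic upward-trending series standing in for the FRED data.
series = [2.0, 2.6, 3.0, 3.6, 4.0, 4.6]

# Step 1: the raw series trends, so take a first difference
# (this is the "I" in ARIMA, used to make the series stationary).
diffed = [b - a for a, b in zip(series, series[1:])]

# Step 2: fit an AR(1) model on the differences, d[t] = phi * d[t-1],
# with phi estimated by least squares.
num = sum(x * y for x, y in zip(diffed, diffed[1:]))
den = sum(x * x for x in diffed[:-1])
phi = num / den

# Step 3: forecast the next difference, then undo the differencing.
next_diff = phi * diffed[-1]
forecast = series[-1] + next_diff
print(f"phi = {phi:.3f}, next value = {forecast:.2f}")
```

A real ARIMA fit (stationarity tests, order selection, maximum-likelihood estimation) involves considerably more machinery, which is exactly the work the code interpreter automates.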

These steps would typically take a data scientist around 3-4 hours to perform, and ChatGPT completed them in minutes, simply by ingesting the data the user uploaded.

This is an impressive feat, and will dramatically reduce the amount of expertise required to facilitate the model-building process.

So…Is Human Expertise Still Required?

Of course, regardless of how good AI gets at coding and model building, human experts are still required to oversee the process.

ChatGPT often generates incorrect code and makes wrong decisions when building statistical models. Companies still need employees who are good at statistics and programming to oversee the data science process and ensure that the model is prompted correctly.

LLMs cannot create full-fledged data products, as humans still need to perform tasks like requirement gathering, debugging, and validating the model’s output.

However, companies will not need as many people to perform these tasks as they did before.

Significant efficiency gains like the ones driven by LLMs would mean that teams can start downsizing.

Instead of getting 10 data scientists to do the job, for instance, companies can simply hire 5.

I believe that entry-level data science jobs will be the first ones to get impacted by this development since LLMs can already perform intermediate-level coding and analytical workflows.

Hiring freezes due to AI are already taking place in big tech, and we might be heading toward a scenario in which the supply of data scientists surpasses the demand for their skills.

How to AI-Proof Your Career in the Age of ChatGPT

Fortunately, it’s not all doom and gloom for us tech and data science professionals. Although LLMs are rapidly improving at tasks like programming and data analysis, they cannot replace human creativity and decision-making.

Here are some ways to AI-proof your career in the age of LLMs:

Gain Business Expertise

Organizations will continue to hire people who generate revenue for the business.

If you have domain expertise in a specific area and understand the intricacies of the company’s operations and customer needs, you are in a unique position to identify opportunities for growth.

The last thing you want is to be in direct competition with AI: you don't want to be the person managing a spreadsheet, or the one everyone approaches to create a quarterly performance report. These jobs can easily be automated and will be the first to go in the ChatGPT age.

I would argue that instead of focusing your effort on learning to use specific software that LLMs can master a lot faster than you can, learn to look at the bigger picture. Develop leadership and managerial skills, and understand how AI can be leveraged to achieve the company’s goals with data.

Embrace AI

According to the Pew Research Center, only 14% of US adults have actually tried ChatGPT. If you are reading this article, using ChatGPT to learn new things, and staying on top of AI advancements, then you are an early adopter.

I suggest incorporating LLMs into your workflows, using products that are integrated with AI, and learning best practices for maximizing efficiency with these models.

This way, you can stay ahead of the curve, and will better understand which parts of your job can be automated, and which ones require human intervention.

Not only will this make you a better data scientist, but when organizations do start incorporating AI into different business areas, you will be in the best position to advise on how it can be used to increase productivity.

In fact, a new role called prompt engineer has emerged recently, commanding salaries of up to $335,000. A prompt engineer is an expert at getting generative AI applications to produce the desired output.

A good prompt engineer is someone who can “project manage” AI into accomplishing tasks like designing web applications.

Regardless of whether you’d like to pursue a job as a prompt engineer, incorporating AI into your existing workflows will give you a competitive edge over people who aren’t currently doing so.

Diversify Your Income

Organizations are going to begin restructuring soon as they develop new business strategies that incorporate AI.

If this results in mass layoffs, the only way to protect yourself is to have various streams of income that do not rely solely on your full-time job.

I suggest building a freelance portfolio: working with more than one organization and earning income from multiple sources will ensure that your future isn't dependent on the decisions of a single employer.

Create a Personal Brand

Finally, Harvard Business Review suggests creating a personal brand to set yourself apart from the crowd.

Medium writers like Tim Denning and Jessica Wildfire, for example, will still have a devoted base of followers and people who consume their products, even if AI is able to emulate their writing style.

This is because at the end of the day, humans enjoy real stories and want to feel connected to other individuals, and this is something that AI simply cannot provide.

Similarly, organizations will continue to hire industry leaders who are recognized in the field, as a statement of quality and branding. Some ways to build a personal brand include building a data science portfolio, creating content, and constantly upskilling.

Takeaways

Generative models are going to transform the job landscape, and fields like data science, analytics, and programming will be impacted due to the efficiency gains provided by these tools.

However, this doesn't spell the end for data scientists. Following the strategies outlined above can help you stay ahead of the curve and ensure that you aren't in competition with AI.

Natassha Selvaraj is a self-taught data scientist with a passion for writing. You can connect with her on LinkedIn.

More On This Topic

  • DataLang: A New Programming Language for Data Scientists… Created by…
  • 20 Questions (with Answers) to Detect Fake Data Scientists: ChatGPT…
  • AI is Not Here to Replace Us
  • Will DeepMind’s AlphaCode Replace Programmers?
  • What will the demand for Data Scientists be in 10 years? Will Data…

NVIDIA Sees GPUs Everywhere

NVIDIA is Funding Itself

NVIDIA CEO Jensen Huang said in an interview with CNBC, “If you want to start a startup today, it’s you and AI. And you’re supercharged by the AI supercomputer.” NVIDIA sees AI and startups in everyone. In turn, for everyone else, NVIDIA and AI are becoming increasingly synonymous. So much so that whenever we see a new AI startup or hear about a company raising funds, we are sure to stumble upon NVIDIA, either as a backer or as a GPU provider.

Companies buying NVIDIA GPUs with VC money is one thing, but what if NVIDIA funds startups so that they can buy NVIDIA GPUs? Funnily enough, that is exactly what is happening now.

Inflection AI, a year-old generative AI startup, just raised $1.3 billion in funding from Microsoft and NVIDIA. And guess what? The company is now building the world’s largest AI cluster, consisting of 22,000 H100 GPUs from NVIDIA. For comparison, OpenAI’s ChatGPT was reportedly trained on 10,000 A100s, the predecessor to NVIDIA’s H100.

22,000 H100s is all you need. https://t.co/keoqEhT4Yx

— Bojan Tunguz (@tunguz) June 29, 2023

At the same time, NVIDIA, along with Google and Salesforce, also announced $141 million in funding for RunwayML, another generative AI startup. Earlier this month, NVIDIA also invested in Cohere, another Google-backed OpenAI rival; the $270 million round includes Oracle and Salesforce as well. As part of the deal, Oracle said that Cohere will offer its AI services on Oracle’s data centres, which run on 16,000 NVIDIA chips. Even CoreWeave has been running on NVIDIA GPUs.

The list of NVIDIA-backed companies goes on and on. The company that was minting billions of dollars is now worth over a trillion. It all started with NVIDIA’s Inception AI startup program: according to PitchBook, by 2021 the program represented two-thirds of all AI startups in the world.

NVIDIA wants a piece of every startup. Synthesia, the video generation platform, gained unicorn status in June by raising funds from NVIDIA. Obviously, most of that amount would be spent on buying NVIDIA GPUs. But interestingly, after the funding, Synthesia’s founder said, “While we weren’t actively looking for new investment, NVIDIA shares our vision for transforming traditional video production into a digital workflow.”

GPUs are the real heroes in AI, and NVIDIA knows it

To put it in simple words, NVIDIA’s strategy is: “We give you money, you buy our GPUs, and your valuation goes up.” A common joke doing the rounds is that technology companies may be hiring more GPUs than employees. For these companies, NVIDIA’s backing and branding have boded well, helping them make their millions.

This strategy has also been working in NVIDIA’s favour when it comes to competing with rivals like AMD and Intel. NVIDIA holds 88% of the global GPU market share, leaving just 12% for AMD and Intel. There is no doubt that the company is the biggest beneficiary of the generative AI race.

Interestingly, the company doesn’t have to worry about any rivalry between Microsoft, Google, Meta, OpenAI, or anyone else, since every one of them uses NVIDIA GPUs to run their models. Companies with the largest clusters of NVIDIA GPUs even boast about how many they have. It’s almost akin to boasting about money and valuation, since a single NVIDIA GPU for building AI models costs around $10,000.

Elon Musk, the head of Tesla and Twitter, also emphasised the importance of GPUs in April, prophesying that everyone would be lining up to buy them, which is clearly the case now. Musk is also expected to enter the space soon with Tesla’s Dojo supercomputer, though he has said he would not sell GPUs but instead offer the supercomputer as a service, without competing with NVIDIA.

NVIDIA has a chance to pull a prank on everyone

It does seem like Huang took Warren Buffett’s famous quote, “The best investment you can make is an investment in yourself,” pretty seriously. And now, he has a chance to pull the biggest prank on everyone: instead of giving startups more money to buy its GPUs, the company could call back all its GPUs and build its own model. Or maybe just give them away for free. You can do it, NVIDIA!

The post NVIDIA Sees GPUs Everywhere appeared first on Analytics India Magazine.

Travel Industry has No Choice but to Embrace Generative AI

Three years ago, the travel industry took a massive blow from Covid, with recovery seeming nearly impossible. It was estimated that nearly 40% of companies would have to shut down operations. But now there is a glimmer of hope, thanks to generative AI. Companies are looking to automate and deploy digital agents that not only save resources but also help recover past losses. As per a recent market study, generative AI in travel is expected to be worth $2.9 billion within the next ten years, growing at 18.2%.

Brian Chesky, co-founder of Airbnb, during the company’s latest earnings call, said that generative AI can be used as an entry point to customer service. “AI is going to be able to give us better service, cheaper and faster, by augmenting the agents (catering to customer care). I think this is going to be a huge transformation.”

In fact, one of the first use cases of ChatGPT was generating travel itineraries. From elaborate travel plans to ticket and hotel recommendations, the chatbot could handle every task. With the introduction of ChatGPT plugins, travel booking platforms Expedia and Kayak (a subsidiary of Booking Holdings) were also integrated, allowing users to connect to real-time data and book tickets directly.

One of the biggest use cases for companies is managing customer volumes. Just as enterprises adopt generative AI to improve productivity and serve customers better, travel companies are doing the same: by integrating AI to automate queries and customer requests, they are enabling support functions through generative AI. Companies are also adding support for different languages to cater to specific market segments. For instance, MakeMyTrip has integrated Hindi into its speech-to-text models.

Narasimha Medeme, VP and head of data science at MakeMyTrip, told AIM that the company’s conversational bot uses a combination of generative AI LLMs and speech-to-text models to serve Bharat customers in English and Hindi. “Multiple other use cases and bots are being tested in the beta stage for Bharat customers.”

Traditional Players?

Before chatbots ruled the space, planning a travel itinerary traditionally involved a travel agent or Google. With vast amounts of information that can send you down a rabbit hole, Google was the go-to source for travel ideas, places to visit, flight plans, and more. However, extensive information from multiple sources not only confuses users but also exposes them to content heavily shaped by SEO, leading to inconclusive results.

Traditional travel agencies have also been bearing the brunt. With generative AI slowly making its way into all booking and planning portals, where do traditional travel agents stand? Thomas Cook, the oldest travel agency in the world, had been facing losses ever since online booking platforms arrived. Owing to a late adoption strategy and insurmountable debts, the parent company had to wind up operations in 2019.

Thomas Cook India, a separate entity, is still operating and posted 150% YoY growth for Q4FY23. The company launched an AI-powered chatbot, TeeCee, in 2019 to enhance customer experience, but has not announced any other generative AI integration on its platform since. This inaction may prove detrimental, considering how quickly other booking portals are adopting AI.

The Indispensable AI

A number of travel companies are integrating generative AI into their platforms. Booking.com, an online travel agency and subsidiary of Booking Holdings, recently unveiled an AI travel planner powered by ChatGPT. The integration is said to offer an interactive, conversational trip-planning experience, and its beta version is available only in the US.

In May, travel platform MakeMyTrip announced a collaboration with Microsoft, introducing voice-assisted booking in Indian languages using Azure OpenAI Service and Azure Cognitive Services. The service aims to help users with personalised travel recommendations, holiday packages, and booking processes.

Air India, too, recently announced that it is looking to add ChatGPT as part of modernising its systems. The company has invested around $200 million in its technology initiatives.

ChatGPT plugins and AI integrations in travel portals help users plan and address their travel needs, and they have injected life back into a sector that took one of the biggest blows from the pandemic. It is evident that travel companies are left with no choice: they can either embrace generative AI and thrive, or eventually succumb to the competition.

The post Travel Industry has No Choice but to Embrace Generative AI appeared first on Analytics India Magazine.