OpenAI is Helping Farmers in India Increase Crop Yields

OpenAI said that AI tools like ChatGPT are being used around the world, in partnership with Digital Green, to help farmers in India and Kenya increase crop yields.

Founded initially as a project within Microsoft Research India’s Technology for Emerging Markets in 2006 by Rikin Gandhi and his colleagues, Digital Green became an independent NGO in 2008.

The organisation focuses on training farmers to produce and share short videos that document their challenges, solutions, and success stories, aiming to facilitate a technology-enabled means of behavior change communication.

Digital Green recently introduced Farmer.chat using GPT-4, covering a wide range of agricultural topics including crop advice, disease identification, weather forecasts, and market information.

Similarly, Jugalbandi is another AI chatbot, powered by OpenAI’s GPT models through Microsoft’s Azure OpenAI Service, that helps farmers and villagers in rural India access information about government schemes beneficial to them.

The chatbot, accessible via WhatsApp, retrieves relevant program details typically documented in English and delivers them in the user’s native language from among 10 of the 22 official Indian languages.

Meanwhile, Wadhwani AI, a non-profit institute dedicated to developing AI solutions for social good, is exploring the use of generative AI to power Kissan (farmer) call centers. Alpan Raval, the chief scientist for AI/ML at Wadhwani AI, said that they are building a Kissan call center support system using generative AI to assist farmers with their queries.

Their approach involves augmenting human experts’ knowledge by utilising models that provide automated responses based on a knowledge base created from current government reports and documents, with a speech interface enabling conversational AI interactions.

KissanAI is another AI startup that has developed a proprietary AI chatbot to assist farmers with agricultural tasks such as irrigation, pest control, and crop cultivation. Last year, it released Dhenu, the world’s first agriculture-specific LLM. Dhenu is designed to provide multilingual support, initially available in Hindi and English, making it accessible to a broad range of farmers across India.

The post OpenAI is Helping Farmers in India Increase Crop Yields appeared first on Analytics India Magazine.

Why intelligent brands are reverting to user-generated content amid the generative AI boom

The generative AI boom represents a watershed moment for the world of marketing, and every brand will soon be faced with a transformative decision to make: join the machines or beat them at their own game.

Out-innovating the large language models (LLMs) like ChatGPT that are capable of creating just about any form of content at a moment’s notice can be profoundly difficult and expensive in comparison to the alternative. But as the boom becomes all-encompassing, keeping things authentic could become a key desire for your leads.

If you’ve spent any time browsing social media in the past year, it’s likely that you’ll have seen a generative AI advertisement. These ads have already become popular among startups and SMEs as a means of creating quick, easy, and visually stimulating content without having to commit more resources to ideation and production than are necessary.


However, ads using suspected generative AI images are already prompting a backlash among customers.

To announce that its products would be sold at Target, Nguyen Coffee Supply promoted the partnership on Instagram with what its followers believed was an AI-generated image.

While the company’s reasons for taking this approach are unclear, it is hard not to sympathize with a business that has 61,000 followers on Instagram and 17 employees, according to PitchBook data.

At that scale, it’s reasonable to expect that a professionally created advertisement using human artists would have made a far deeper dent in the budget when machines can deliver similar results.

It’s for this reason that we may be at the beginning of a war of attrition between brand authenticity and generative AI marketing strategies.

The rise of the acoustic brand

At this stage, it’s important to acknowledge that generative AI is inevitable. Cost-effective marketing generally wins when it comes to KPIs and conversion rate optimization (CRO), and for many firms, embracing the capabilities of LLMs is simply too big an opportunity to pass up because of its negative connotations.

However, anxiety over generative AI is real, and will only intensify when the current hype cycle gives way to implementation at enterprise level.

In terms of job security, 87% of marketers are worried about technology replacing their jobs. This sentiment will be shared across a vast range of industries as the technology continues to evolve.

How will this impact consumer expectations in the coming years? According to Gartner data, it could drive the rise of the ‘acoustic brand’.

The consulting firm anticipates that by 2027, “20% of brands will lean into positioning and differentiation predicated on the absence of AI in their business and products.”

This is based on the expectation that the generative AI hype cycle will give way to saturation and widespread connotations of inauthenticity within GenAI marketing materials.

As consumer sentiment shifts toward rejecting brands’ generative AI content, we’ll see more consumers actively seeking out individuality in branding. This will manifest in a greater appreciation for brands with distinct personalities and shared values.

Amid markets saturated by the prevalence of cost-cutting artificial intelligence, customers will look to build connections with brands, which can help to boost retention and advocacy in a more sustained manner than today. This could even manifest itself in a wider rejection of AI chatbots as consumers look to have more satisfying and memorable experiences when engaging with brands.

This poses a dilemma for startups and SMEs. Generative AI will soon have the capabilities to produce ultra low-cost marketing materials and ads that can be ready for publication instantaneously, but its true value could undermine its effectiveness in an age where authenticity reigns supreme.

Fortunately, there’s a solution that can help to promote brand value and authenticity while remaining largely cost effective for brands across a wide range of industries: user-generated content (UGC).

Embracing UGC as the ultimate social proof

In an age where consumers are demanding more acoustic brands, user-generated content will become a leading consideration for marketers online.

What is UGC? The term refers to just about any original, brand-specific content that’s created by customers and published on social media or other online channels. UGC comes in many forms, including testimonials, product reviews, images, and videos.

Furthermore, UGC is an excellent tool for building trust, and can even influence purchase intent. Adweek data suggests that 85% of consumers believe that UGC is more influential than content made by brands.

User-generated content could also be a tool to bring a sense of creativity back to the marketing landscape, and could be important to prevent generative AI burnout from impacting your brand.

This has been made all the more pertinent in the wake of the user backlash aimed at Magic: The Gathering, which entered the headlines at the beginning of 2024 when fans accused the company of publishing a generative AI image just weeks after promising to only use art created by humans.

The company denied that it had used generative AI even as fans continued to point to telltale signs of artificial intelligence, illustrating the extra care businesses should take with the content they create in the age of GenAI.

Optimizing brand values with UGC

So, how can your brand utilize user-generated content to optimize its brand values for less? There are plenty of approaches that can be embraced according to the personality your brand wants to utilize on social media and on-site.

One reliable approach is to run competitions in which users submit images or multimedia posts and the best entry wins. The prize could be a free product, a discount, or any other freebie that invites consumers to share pictures of themselves using or posing with your products.


As an example, for National Pet Day, Starbucks published a user-submitted reel of images featuring their pets alongside the brand’s selection of drinks and products.

Apple has also historically been a strong proponent of UGC, and made a splash by centering its iPhone advertising campaign around the high-quality images captured by users with the smartphone’s high-spec camera.


However, UGC doesn’t have to be formulaic. Taco Bell engaged directly with one high-profile influencer as a means of showcasing a new product concept to a wider audience.

This can also work by conducting social listening to thank users for positive reviews and testimonials online.

By engaging more frequently with followers and your online audience, your opportunities to repurpose organic UGC grow. It’s also worth taking the time to refine your tone of voice on social platforms to build a stronger, more identifiable sense of authenticity among your consumers.

Although it can be tricky for startups and SMEs to maintain a consistent brand across channels, using a digital PR agency to establish a personality that extends from your on-site presence and beyond can be extremely advantageous, particularly as individuality becomes increasingly sought after in the age of generative AI.

Generative AI to complement marketing campaigns

Although generative AI will certainly become more omnipresent throughout the marketing landscape, ambitious brands are more likely to find success by using the technology to complement existing campaign strategies.

The combination of synthetic data and big data insights means that marketers have more analytical insights than ever before at their disposal, while LLMs like ChatGPT can be excellent tools to assist in content ideation and insight.

This symbiotic relationship between generative AI and human marketers means that campaigns can become more creative and more efficient than ever, while brands cater to more acoustically aware consumers.

While it’s impossible to understand the impact that generative AI will have on marketing entirely, it’s essential that brands remain focused on audience sentiment towards the technology to understand how to formulate their campaigns in the future. By listening to your customers today, it’s possible to build the winning strategies of tomorrow.

Udemy Report: Which IT Skills Are Most in Demand in Q1 2024?

The online courses people in the tech industry are taking can tell us a lot about which IT skills are in demand and what career paths look like today.

Udemy is an online learning platform that collects data quarterly about which courses on its platform are most in demand. We’ve dialed in on the tech and IT skills from their Q1 2024 report. Explore these in-demand IT skills to help choose where your tech career should go next.


Top 10 global emergent tech skills

The top 10 global emergent tech skill topics accessed on Udemy are:

  • Informatica PowerCenter.
  • Microsoft Playwright.
  • 1Z0-071: Oracle Database SQL Certified Associate certification.
  • CompTIA Security+ certification.
  • DP-203: Microsoft Azure Data Engineer Associate certification.
  • SAP FICO.
  • Data structures.
  • Java algorithms.
  • HashiCorp Terraform Associate certification.
  • Selenium Web Driver.

“Certifications are an incredibly strong currency in the tech community,” said Scott Rogers, senior vice president of instructor and content strategy at Udemy, in an email to TechRepublic. “They’re broadly recognized by companies worldwide who increasingly require certification for key technical roles in cloud computing, project management, and security.”

More than 10 million people enrolled in IT certification courses on Udemy in the last year, Rogers reported.

SEE: Self-upskilling could make you the right fit to fill in the IT talent gap in Australia. (TechRepublic)

Tech skills that cropped up in the top emergent skills on Udemy and are reflective of the tech industry include:

  • Git.
  • Oracle SQL.
  • C#.
  • Amazon AWS and AWS Certified Cloud Practitioner.
  • Python.

“We’ve seen customers leverage Udemy to support certification preparation and help IT professionals significantly improve their chances of passing certification exams,” Rogers said.

English as a second language, sustainability and ChatGPT were also popular topics

One of Udemy’s most popular courses prepares learners for the TOEIC tests, a business-oriented standard for English as a second language. Udemy attributes this to multinational businesses using English as their primary language. Udemy learners were interested in Spanish as well, with that language appearing sixth on the list of top 10 global emergent professional skills.

The full list is:

  1. TOEIC tests.
  2. Bookkeeping.
  3. Marketing strategy.
  4. Supply chain.
  5. Business analytics.
  6. Spanish language.
  7. Information security.
  8. Deep learning.
  9. Generative AI.
  10. Financial analysis.

Environmental, social and governance-related courses were remarkably popular. Overall, Udemy saw a 3,128% year-over-year increase in ESG course consumption. Only one topic grew in popularity more than ESG: ChatGPT. (Interest in ChatGPT-related courses grew by 5,226% in Q1 2023 alone.) ESG-related topics available as courses on Udemy include corporate sustainability, DEI and business cases for corporate ESG.

In-demand tech skills according to LinkedIn, Indeed, CompTIA

Udemy isn’t the only place to find courses on IT skills, and the skills popular on Udemy aren’t the only skills employers are looking for.

LinkedIn listed artificial intelligence and machine learning as the most in-demand IT skills in 2024, followed by data science and cybersecurity. Indeed found that generative AI was the highest-paid field in the industry. The Computing Technology Industry Association lists the following as the top five IT skills in demand in 2024:

  • Artificial intelligence.
  • Technical support.
  • Networking.
  • Cloud computing.
  • Linux.

Tech is a growing and lucrative field

According to the Bureau of Labor Statistics, there were nearly 364,000 open job postings for tech jobs in December 2023. Computer and information technology occupations are predicted to grow faster than most jobs from 2022 to 2032, and the median annual salary in the field was $104,420 in May 2023.

Tata Communications Announces CloudLyte, a Fully Automated Edge Computing Platform

Tata Communications recently unveiled a fully automated edge computing platform called CloudLyte, designed to empower future-ready enterprises to thrive in a data-driven world.

Tata Communications CloudLyte caters to the needs of global enterprises through its multi-access, cloud, and infrastructure-agnostic architecture.

Through its unique ‘solution in a box’ approach, Tata Communications CloudLyte provides enterprises with the platform, infrastructure, network, managed services, and the use case as a comprehensive unified offering. This approach enables swift deployment (within minutes) and effortless scaling as required, thus futureproofing investments.

With real-time inferencing and auto scaling, the platform seamlessly extends cloud capabilities to the edge, bringing in the agility and flexibility of the cloud.

Tata Communications CloudLyte also manages edge resources centrally for a seamless experience and has built-in security with features like zero-trust architecture and layered defences — simplifying enterprise operations, maximizing efficiency, and driving business growth.

“By bringing the cloud experience to the edge, Tata Communications CloudLyte allows global enterprises to harness the full potential of the cloud through a well-integrated cloud fabric. With Tata Communications CloudLyte — we’re not just building the future — we’re redefining it. From AI-driven predictive maintenance to retail analytics, the potential is limitless,” Neelakantan Venkataraman, vice president and global head – cloud and edge business, Tata Communications said.

The post Tata Communications Announces CloudLyte, a Fully Automated Edge Computing Platform appeared first on Analytics India Magazine.

Metadata management in data lakes


Metadata management is critical to data lake architecture, ensuring that data is well-organized, easily discoverable, and effectively utilized. As data lakes store vast amounts of raw data in their native format, managing metadata becomes essential to maintain data quality, improve data governance, and facilitate data analytics and reporting. This article explores the importance of metadata management in data lakes and discusses how ETL processes play a role in capturing, storing, and managing metadata effectively.

What is metadata?

Metadata is data about data: it describes the content, structure, and context of the data stored in a data lake. Metadata includes attributes such as data type, source, creation date, last modified date, and relationships between different data sets.
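As a minimal sketch of what such a record can look like (the field names here are illustrative, not a standard schema):

```python
from datetime import date

# A hypothetical metadata record for one data set in a lake.
# Field names are illustrative, not a standard schema.
metadata = {
    "name": "sales_transactions",
    "data_type": "parquet",
    "source": "erp.orders",                     # upstream system and table
    "created_at": date(2024, 1, 15).isoformat(),
    "last_modified": date(2024, 5, 1).isoformat(),
    "related_datasets": ["customers", "products"],
}

print(metadata["created_at"])  # 2024-01-15
```

Even a flat record like this already answers the basic discovery questions: what the data is, where it came from, and what it relates to.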

Importance of metadata management in data lakes

Effective metadata management in data lakes offers several benefits:

Improved data discoverability

Well-managed metadata enables data analysts and scientists to quickly discover and access relevant data sets within the data lake. This accelerates the data discovery process, reduces data silos, and promotes data reuse across the organization.

Enhanced data quality and governance

Metadata management helps maintain data quality by providing information about data lineage, transformations applied, and quality checks performed during the ETL processes. This transparency ensures data integrity and trustworthiness, facilitating better data governance and compliance with regulatory requirements.

Facilitated data analytics and reporting

Metadata provides valuable insights into the structure and content of the data, enabling users to understand the data schema, relationships, and dependencies. This knowledge is crucial for data analytics, reporting, and deriving meaningful insights from the data lake.

ETL and metadata management

The ETL process serves as a linchpin in metadata management within data lakes. It facilitates the seamless movement and transformation of data and acts as a conduit for the acquisition and enrichment of critical metadata. Let’s delve into the multifaceted contributions of ETL at each stage of the data lifecycle in metadata management.

Metadata capture during extraction

The initial stage of the ETL process, extraction, is instrumental in capturing essential metadata about the source data. This metadata encompasses a myriad of details, such as:

  • Data Source Information: Identification of the source systems or applications from which the data originates, including database names, table names, and server details.
  • Extraction Timestamps: Accurate recording of the date and time when the data was extracted, facilitating traceability and ensuring data lineage can be established.
  • Source System Identifiers: Capture of unique identifiers or keys from the source system that allow for the tracing back to the original data source, aiding in data lineage tracking and validation.

By capturing this metadata during the extraction phase, ETL processes provide valuable context and lineage information that is crucial for understanding the data’s origin, quality, and history.
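The extraction step above can be sketched in a few lines of Python. This is a toy illustration, not a real connector: `rows`, the function name, and the metadata keys are all hypothetical stand-ins for whatever the actual extraction tooling provides.

```python
from datetime import datetime, timezone

def extract_with_metadata(rows, source_system, table):
    """Return the extracted rows together with lineage metadata.

    A minimal sketch: `rows` stands in for whatever a real source
    connector returns; all names here are illustrative.
    """
    meta = {
        "source_system": source_system,  # e.g. database or server name
        "source_table": table,
        # extraction timestamp, for traceability and lineage
        "extracted_at": datetime.now(timezone.utc).isoformat(),
        # keep the source keys so each row can be traced back
        "source_ids": [r["id"] for r in rows],
        "row_count": len(rows),
    }
    return rows, meta

rows = [{"id": 101, "amount": 5.0}, {"id": 102, "amount": 7.5}]
data, meta = extract_with_metadata(rows, "erp-db-01", "orders")
print(meta["source_ids"])  # [101, 102]
```

Recording the source identifiers and timestamp at this stage is what later makes lineage questions ("where did this row come from, and when?") answerable.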

Metadata enrichment during transformation

The transformation phase of the ETL process is where data is cleaned, enriched, and structured to make it suitable for analysis and reporting. This phase also serves as an opportunity to enhance the metadata further by adding:

  • Transformation Details: Detailed documentation of the transformations applied to the data, such as data cleansing rules, data type conversions, and calculations, providing insights into the data transformation logic and ensuring repeatability and consistency.
  • Quality Metrics: Recording of data quality metrics, such as completeness, accuracy, and consistency checks performed during the transformation process, aiding in assessing data quality and compliance with quality standards.
  • Business Rules and Logic: Storing information about any business rules or logic applied to the data, which is essential for interpreting and analyzing the data correctly and ensuring alignment with business requirements.

By enriching the metadata during the transformation phase, ETL processes contribute to enhanced data governance, transparency, and compliance while facilitating better data analytics and insights generation.
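A sketch of how a transformation step might record its own details and quality metrics alongside the cleaned data (the cleansing rule and metadata keys are hypothetical examples, not a prescribed format):

```python
def transform_with_metadata(rows, meta):
    """Apply a simple cleansing rule and enrich the metadata with
    transformation details and quality metrics. Illustrative only."""
    # cleansing rule: drop rows with a missing amount
    cleaned = [r for r in rows if r.get("amount") is not None]

    enriched = dict(meta)  # don't mutate the extraction metadata in place
    enriched["transformations"] = ["drop rows with null amount"]
    enriched["quality"] = {
        "input_rows": len(rows),
        "output_rows": len(cleaned),
        # completeness: fraction of rows that survived the quality check
        "completeness": len(cleaned) / len(rows) if rows else 1.0,
    }
    return cleaned, enriched

raw = [{"id": 1, "amount": 9.99}, {"id": 2, "amount": None}]
cleaned, meta = transform_with_metadata(raw, {"source_table": "orders"})
print(meta["quality"]["completeness"])  # 0.5
```

Because the rules applied and the metrics observed are written down next to the data, the same transformation can be audited and repeated consistently later.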

Metadata storage during loading

Once the data has been transformed, it is loaded into the data lake. Alongside the data, the metadata captured and enriched during the extraction and transformation phases is stored in the data lake or a dedicated metadata repository. This stored metadata includes:

  • Loading Timestamps: Accurate recording of the date and time when the data was loaded into the data lake, facilitating data versioning and ensuring data freshness.
  • Data Schema and Structure: Storing information about the data schema, field definitions, relationships, and dependencies, providing a comprehensive view of the data structure and aiding data exploration and querying.
  • Metadata Cataloging: Organizing and cataloging the metadata to make it easily searchable and accessible for users, analysts, and data scientists, promoting data discoverability, reuse, and collaboration across the organization.

By storing this metadata alongside the data, organizations can maintain a comprehensive and up-to-date metadata repository that provides valuable insight into the data’s structure, lineage, quality, and usage, thereby facilitating data-driven decision-making and innovation.
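The cataloging step can be illustrated with a toy in-memory catalog; a production lake would use a dedicated metadata store, and the class and method names here are hypothetical:

```python
import json

class MetadataCatalog:
    """Toy in-memory catalog. A real data lake would back this with a
    dedicated metadata store; names here are illustrative."""

    def __init__(self):
        self._entries = {}

    def register(self, name, meta):
        """Store a data set's metadata under its name."""
        self._entries[name] = meta

    def search(self, keyword):
        """Naive full-text search over each entry's serialized metadata."""
        keyword = keyword.lower()
        return sorted(
            name for name, meta in self._entries.items()
            if keyword in json.dumps(meta).lower()
        )

catalog = MetadataCatalog()
catalog.register("sales_transactions", {
    "schema": ["id", "amount"],
    "source": "erp.orders",
    "loaded_at": "2024-05-01T00:00:00Z",
})
print(catalog.search("erp"))  # ['sales_transactions']
```

Even this naive keyword search shows the payoff of cataloging: analysts can find data sets by what their metadata says, not just by knowing file paths in advance.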

Benefits of ETL-driven metadata management

The seamless integration of ETL processes with metadata management in data lakes offers a multitude of benefits:

Improved data governance and compliance

ETL-driven metadata management enhances data governance by providing transparency into data lineage, transformations, and quality controls. This transparency ensures that data is managed, accessed, and used in compliance with organizational policies and regulatory requirements. It reduces data inconsistencies and non-compliance risks.

Enhanced data discovery and accessibility

By capturing and storing comprehensive metadata, ETL processes enable users to quickly discover, access, and understand the data within the data lake. This facilitates data reuse, reduces data silos, and promotes collaboration across the organization, accelerating data-driven initiatives and fostering a culture of data-driven decision-making.

Facilitated data analytics and insights

The rich metadata captured and managed through ETL processes supports data analytics, reporting, and insights generation. It provides the context, lineage, and quality information that analysts and data scientists need to derive meaningful insights, build accurate models, and make informed decisions, thereby unlocking the full potential of the data lake for advanced analytics and innovation.

Conclusion

Metadata management is an important part of data lake architecture, supporting data discoverability, quality, governance, and analytics. ETL processes play a significant role in capturing, storing, and managing metadata throughout the data lifecycle. By implementing robust metadata management practices and leveraging ETL capabilities effectively, organizations can maximize the value of their data lakes, enabling data-driven decision-making and fostering innovation across the enterprise.

Not all Indians are CEOs

Not all Indians (in the US) are CEOs

Last month, US Ambassador to India Eric Garcetti hailed Indian immigrants and said, “…now the joke is, you cannot become a CEO in America if you are not Indian. Whether it is Google, Microsoft, or Starbucks, people have come and made a big difference…”

#WATCH | Delhi: US Ambassador to India Eric Garcetti says, "The successes have happened, more than 1 in 10 CEOs of Fortune 500 companies now are Indian immigrants who studied in the US. The old joke was you could not become a CEO in the US if you are Indian, now the joke is you… pic.twitter.com/gTdvXng9mi

— ANI (@ANI) April 26, 2024

Responding to Garcetti’s remarks, Rajeev Chandrasekhar, MoS for Electronics and Information Technology, reflected on the transformation witnessed over the decades and said, “India is now home to #AllThingsTech and Indians are occupying leadership positions in almost all top companies of the world.”

Amidst this narrative of success, lies a question: Does every Indian inherently possess the potential to become a CEO, or is it a culmination of skill, struggle, and hard work that propels individuals to new heights of success?

While the success stories of Indian immigrants in the US are aplenty, the route to leadership is far from easy.

Story of every Indian in the US isn’t the same

Indian students studying in the United States, for instance, face significant challenges in securing even internships, attributed to a slowdown in job growth and heightened competition during election years. Factors such as rising inflation, increased cost of living, and sponsorship difficulties further compound their challenges.

As per a recent update, the US Citizenship and Immigration Services (USCIS) reported 780,884 H-1B registrations for FY 2024, an increase of 61% over the 483,927 registrations for FY 2023. But, there has been a significant decrease of nearly 40% in lottery applications for H-1B visas.

The USCIS selected only 14.6% of eligible H-1B registrations for FY 2024, based on a National Foundation for American Policy analysis of government data. That compares to 26.9% for FY 2023 and 43.8% for FY 2022. For FY 2021, nearly half, or 46.1%, were selected in the H-1B lottery process.

When Parag Agrawal was appointed the CEO of Twitter in 2021, it sparked discussions on the factors contributing to the success of Indian-origin individuals in senior positions within tech companies.

Some reports suggested that cultural shifts and strategic innovations spearheaded by Indian-origin CEOs played a pivotal role in the trajectory of companies like Microsoft and Google.

Meanwhile, Microsoft CEO Satya Nadella offered a different perspective in his book ‘Hit Refresh’. “Our industry does not respect tradition. What it respects is innovation,” he emphasised.

Indian Americans contribute to the US economy

Despite constituting only a small fraction of the US population, Indian Americans exert significant influence through their contributions to the US economy. As highlighted by Tarak Hassan, a BU College of Arts & Sciences professor of economics, “The headline finding is that immigrants are good for local economic growth and, in particular, educated migrants are doing a lot of that.”

Where Were Unicorn Founders Born?
New result on immigrants & US economy
For 1,078 founders of 500 US unicorns, I identified founders' countries of birth. Conclusion: Over four out of ten unicorn founders are first gen immigrants.@StanfordGSB #startups #immigration #unicorn pic.twitter.com/07GbKrIopy

— Ilya Strebulaev (@IlyaStrebulaev) January 13, 2022

A recent study by the Migration Policy Institute (MPI) reveals that workers with immigrant backgrounds are prominently represented in the fields of science, technology, engineering, mathematics (STEM), and social sciences. In 2023, they accounted for 38% of the workforce in STEM-related occupations, where college-educated individuals prevail, with a median salary of $90,900 annually.

It is projected that these occupations will grow by a substantial 11% due to the increased adoption of AI and other technologies, digitalisation of the US economy, and the escalating threat of cyberattacks and data breaches.

As of now, India can’t rule America

Indian-Americans constitute the second-largest immigrant population in the US, estimated at around four million. However, it’s notable that Indians are not widely represented in leadership roles across many of the world’s top companies.

According to a CompTIA report, there are over 557,000 software and IT service companies in the US, with approximately 13,400 tech startups launched in 2019 alone. Yet only a small fraction of individuals who were previously Indian citizens and are now US citizens hold senior positions there.

Despite India’s prominence in the tech industry and the presence of thousands of alumni from the IITs in the US since the 1960s, Indian passport holders occupying top positions in globally significant institutions remains rare.

Hope everyone gets the joke now.

The post Not all Indians are CEOs appeared first on Analytics India Magazine.

The Fascinating Reason behind Silicon Valley’s Love for the Word Grok


You’ve likely heard Elon Musk discuss Grok, the witty open-source AI chatbot created by his company xAI. Have you ever pondered the reason behind his decision to rebrand his AI system from TruthGPT to Grok?

Interestingly, xAI is not the only Silicon Valley company that is fascinated by the name Grok. There is a California-based AI chip company also named Groq (but with a ‘q’), which introduced a language processing unit (LPU), a new end-to-end processing unit system, earlier this year.

That’s not all. Last year, New Relic, the company behind the leading observability platform, also named its AI-powered observability assistant New Relic Grok.

In a recent interaction with AIM, New Relic CEO Ashan Willy joked that Musk stole the name from New Relic. “We did not copyright it,” he remarked.

And that’s not all either. Canadian musician Claire Elise Boucher, popularly known as Grimes, released an AI-powered range of toys, one of which is named Grok.

Grimes, who shares three children with Musk, collaborated with Silicon Valley startup Curio to introduce these products. Intriguingly, there’s no connection between Curio’s offerings and Musk’s Grok; in fact, Curio’s AI toys came onto the scene even before Musk’s AI chatbot.

Why is Grok so popular?

So the question does arise, why are so many Silicon Valley products named Grok? The answer could be linked to Robert A Heinlein’s 1961 science fiction novel Stranger in a Strange Land.

Heinlein coined the term ‘grok’. In his novel, Valentine Michael Smith, a Martian, employs the word ‘grok’, which denotes empathising to such a deep extent with others that you merge or blend with them, conveying a profound understanding of someone or something.

In short, Grok means to understand things intuitively and empathetically, and hence, it makes sense to name an AI system Grok in an era when we are chasing superintelligence.

While it can’t be confirmed, Heinlein’s novel is the most plausible source of inspiration for Silicon Valley’s fascination with the word grok.

In an interview, Musk mentioned Heinlein’s novel as one of his favourites, along with Isaac Asimov’s works. Musk also tweeted the name of the book last year, but without providing any context.

Musk’s keen interest in colonising Mars might also stem from his deep admiration for Heinlein’s and Asimov’s works, both of which prominently feature Mars in their narratives.

Grokking the code

Even before Musk made the name Grok popular, it was a widely used term, both as a verb and noun, in the programming space.

In 1995, Ric Holt designed a database query engine for manipulating collections of binary relations and named it Grok (now defunct). Jingwei Wu at the University of Waterloo wrote a Java re-implementation of Grok, called JGrok.

In 2006, a number of Zope developers created an open-source web framework based on Zope Toolkit (ZTK) technology called Grok.

Then there is Grok AIOps, which was developed by Grokstream. Over the years, the word grok has been adopted to convey a deep, intuitive understanding of a concept, particularly when working with complex systems or technologies.

In the context of programming, “grokking” refers to not just comprehending code or algorithms superficially but rather internalising them to the point of truly understanding their intricacies and implications.

Silicon Valley’s fascination with science fiction

Interestingly, this is not the first time Silicon Valley has been influenced by popular science fiction. Over the years, many founders have drawn heavily on it.

When OpenAI announced that its professional plan for ChatGPT was priced at $42, many linked it to Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, another popular science fiction novel.

In Chapter 27 of the novel, two programmers, Lunkwill and Fook, are chosen to ask the Ultimate Question to Deep Thought, a supercomputer programmed to calculate the answer to the Ultimate Question of Life, the Universe and Everything. The supercomputer pauses and reveals the answer—42.

OpenAI CEO Sam Altman himself has made references to the number. Last year, when asked about the release date of the upcoming multimodal successor to GPT-3, Altman joked, in a nod to the book, that the model would take “a while” to finish and that it kept responding ‘42’ to every prompt.

Moreover, xAI notes that Grok is modelled after Adams’ popular novel, and is intended to answer almost anything and, far harder, even suggest what questions to ask.

Similarly, Jeff Bezos, the founder of Amazon, has revealed his enduring love for Star Trek, a passion that dates back to his childhood. He attributes the inspiration for Amazon’s virtual assistant, Alexa, to the omniscient computer featured on the Starship Enterprise in the series.

The post The Fascinating Reason behind Silicon Valley’s Love for the Word Grok appeared first on Analytics India Magazine.

What is generative AI audio? Everything you need to know

Generative AI is arguably humankind’s best invention since fire and baked bread.

The analogy with fire holds because when fire was discovered, people feared it. They saw it as apocalyptic, capable of causing destruction. It was only when humans learned to domesticate fire that progress followed.

Artificial Intelligence (AI), specifically generative AI, stands at a similar juncture. On one side are tech enthusiasts, supercharged about its possible applications across domains and industries. On the other are skeptics, who believe AI to be an agent of doom and fear that AI tools will make human talent obsolete.

Now, this conversation is heating up with the rise of a very niche application of Gen AI: audio. After giving artists and writers tools to think, visualize, and create better, Gen AI has carved out a unique place in the field of acoustics.


What is generative AI audio?

In the simplest of terms, this technology generates audio content from text inputs, or prompts. The output can range from sound effects to an entire music album (more on its applications later).

The anatomy of generative AI audio

Converting a prompt as simple and vague as “cinematic music for a horror short film with a string section” into audio is a daunting task. It’s a complex magic trick involving layers of intricate technologies, techniques, and processes.

Generating audio that comes close to passing as natural involves techniques such as:

Tokenization – where data is broken down into discrete tokens that are individually analyzed and processed by Machine Learning (ML) algorithms. Each token corresponds to a distinct aspect of an audio signal, such as pitch, scale, rhythm, and more.

Quantization – where continuous audio signals are represented as discrete values so the same generation technique deployed in LLMs (Large Language Models) can be utilized for audio generation.

Vectorization – which involves transforming audio signals into high-dimensional vector spaces to establish relationships between diverse audio signals. ML models then identify and interpret patterns in the signals to generate fresh audio.
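The quantization step above can be illustrated with a minimal sketch. One common scheme is mu-law companding, used in classic telephony codecs and WaveNet-style audio models, which maps each continuous sample in [-1, 1] to one of 256 integer tokens — the same kind of discrete vocabulary an LLM-style model can predict. This is an illustrative example under that assumption, not any specific product’s code, and the function names are hypothetical.

```python
import math

def mu_law_quantize(x: float, mu: int = 255) -> int:
    """Map a continuous sample in [-1, 1] to one of mu+1 discrete tokens."""
    # Compress the sample logarithmically, preserving its sign.
    y = math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)
    # Shift/scale the compressed value into the integer range [0, mu].
    return int((y + 1) / 2 * mu + 0.5)

def mu_law_dequantize(token: int, mu: int = 255) -> float:
    """Approximate inverse: turn a discrete token back into a sample."""
    y = 2 * token / mu - 1
    return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)

# A tiny 440 Hz sine "waveform" sampled at 8 kHz becomes a short
# sequence of integer tokens that a sequence model could be trained on.
wave = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8)]
tokens = [mu_law_quantize(s) for s in wave]
```

The logarithmic compression is the key design choice: it allocates more of the 256 levels to quiet sounds, where the human ear is most sensitive, so fewer tokens are wasted on loud extremes.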


Applications of generative AI audio

While it’s easy to reduce the applications of Gen AI audio to music creation or deepfake audio, there are several unique and game-changing use cases of this technology that are highly relevant today.

Let’s explore some compelling ones.

Voiceovers and text to speech in EdTech

One of the most novel applications of generative AI audio lies in EdTech and infotainment, where voice-synthesis technology can generate tutor voices and sound effects to elevate storytelling in audiobooks, YouTube videos, course reading materials, eLearning modules, and more.

Sound designing

In the realm of movies and video games, where creativity knows no bounds, technicians are often compelled to invent fresh sounds. Think of the sounds of Dune, which made the auditory experience as immersive as possible; studios must push the limits of familiarity and blend two or more everyday sounds into something unheard. Gen AI for audio can pull this off seamlessly through prompts, which experts can later tweak.


AI music creation

While this is a delicate topic that invites opinions and debates, it’s impossible not to acknowledge the power of Gen AI in creating music from scratch. From gaming to filmmaking, AI tools enable independent creators and artists with limited budgets to elevate their content into something epic and cinematic.

Hyper-personalized chatbots

As brands and businesses race to deliver the most personalized experiences to customers, generative AI audio can amplify this by a notch. Based on target audiences and demographics, chatbots can be trained to speak with the accents, diction, and slang people are familiar with, for instant brand connection.

Real-time audio description for accessibility

Smartphones, devices, and even video streaming platforms offer real-time audio description of content for visually impaired people. Such autonomous generation of real-time audio content enables differently abled people to seamlessly accomplish everyday tasks that might otherwise be constraining.

Challenges involved in generative AI audio development

Despite its vast potential, several bottlenecks stall tech enthusiasts and businesses from leveraging the complete potential of this technology. Instead of taking a generic approach, let’s classify them into three distinct aspects:

Technical and output-specific challenges

As fascinating as music generation from scratch sounds, the concept is still in its nascent stages. This means it is not devoid of technical concerns such as:

  • poor audio quality
  • missing beats
  • robotic delivery of accents and voices
  • inconsistencies in real-time generation of audio
  • audio output on a completely different tangent from the entered prompt
  • high latency and more.

Ethical constraints

This is a two-fold challenge that involves:

Deepfakes and misinformation, where synthesized audio morphed over an existing video or plain audio clip can be generated in a target’s voice to extort money or push specific agendas

Ownership and copyright, which raises the perennial question: who owns the music or sound generated by AI? Moreover, is it ethical to train audio-synthesis models on online data without fair compensation for original creators?

Sourcing training datasets

The previous challenge is an excellent segue into this topic as black hat techniques prevail in training Generative AI audio models. With that said, there is also a pressing demand for quality audio training datasets that have clean audio customized to distinct requirements.

Bias is another critical concern in training generative AI audio models, where stereotypical, inaccurate, or offensive audio data slipping into training sets can corrupt the generated outputs.

Microsoft is Starting to Look a Lot Like OpenAI with ‘MAI-1’


Microsoft is done relying on others for its models. According to reports, though the company had been training smaller models like Orca and Phi all this while (using GPT and incorporating Meta’s Llama on its platform), this time it is training a model large enough to compete with others.

Referred to as MAI-1 (possibly Microsoft AI-1), the model is being developed internally by the company, and is around 500 billion parameters in size.

Its development is being headed by Mustafa Suleyman, formerly a leader in AI at Google and most recently CEO of the AI startup Inflection, who now oversees Microsoft’s AI division. In March, Microsoft acquired a majority of Inflection’s staff and paid $650 million for its intellectual property rights.

However, MAI-1 is a Microsoft-developed model, distinct from those previously developed by Inflection. While it may leverage training data and technology from the startup, it is an independent project.

Though the exact purpose has not been disclosed yet, it is possible that Microsoft might incorporate the model into all its Copilot products. This would mean the company moving away from OpenAI’s GPT and Codex models.

This comes against the backdrop of OpenAI partnering with Stack Overflow to improve its products. OpenAI is also reportedly planning to release a search engine powered by Bing, possibly as a way to compete with Google, something Microsoft itself has a history of doing.

No competition?

Microsoft’s strategy has highlighted three trends for 2024 – small language models, multimodal AI, and AI in science.

This time around, it seems like the company has decided to combine multimodal AI with large language models. The small language models will continue to power on-device, edge use cases such as laptops, while larger ones might be available in other core products.

All of this comes only weeks before the Microsoft Build conference. So it is possible that the model will be introduced at the conference. To train the new model, Microsoft has allocated a significant cluster of servers equipped with NVIDIA GPUs.

To make the case clear, Microsoft’s CTO Kevin Scott went on LinkedIn to explain that this is not in any way a competition with OpenAI. “I’m not sure why this is news, but just to summarise the obvious: we build big supercomputers to train AI models. Our partner OpenAI uses these supercomputers to train frontier-defining models; and then we both make these models available in products and services so that lots of people can benefit from them. We rather like this arrangement,” he said.

He further stated that the company has always been building bigger and better supercomputers for OpenAI to further AI research and wishes to continue this arrangement. “There’s no end in sight to the increasing impact that our work together will have,” he added.

So it seems like Sam Altman’s $50 billion dream of building AGI is going to get funded by Microsoft.

Scott also clarified that all the research that Microsoft does is about building models: “AI models turn out to be interesting things to work on, and our researchers do great work studying and building them.”

He also confirmed that there would be more models coming out soon, including MAI, Phi, and even Turing.

But is it true?

Even though Scott clarified that there is no competition between OpenAI and Microsoft, it is still worth wondering whether a model as big as MAI-1 would actually be used in Microsoft products instead of OpenAI’s GPT.

Moreover, it would be prudent for Microsoft to have a backup plan in case the deal with OpenAI falls through, as has happened with several others in the field. CEO Satya Nadella seems to be playing a different AI game. Under him, Microsoft has invested in all kinds of AI companies, from OpenAI and Mistral to Databricks and Figure AI.

Recently an email from 2019 resurfaced, in which Scott told the Microsoft team that they needed to invest in OpenAI as the competition with Google was rising and that their AI model was “scarily good”.

Suleyman recently posted on X saying, “AI is everything at Microsoft”. He also highlighted that the company is building massive products using AI and has a definite vision for Copilot.

Everything about this seems forced. There seems to be no other reason for Microsoft to build such large models and spend so much on compute if they’re not making it for commercial purposes. Moreover, on his hiring, Suleyman was touted as the “new” Sam Altman.

So, is Microsoft possibly becoming the OG of AI, aka OpenAI?

The post Microsoft is Starting to Look a Lot Like OpenAI with ‘MAI-1’ appeared first on Analytics India Magazine.

Meta’s AI tools for advertisers can now create full new images, not just new backgrounds


Meta is rolling out an expanded set of generative AI tools for advertisers, after first announcing a set of AI features last October. Now, instead of only being able to create different backgrounds for a product image, advertisers can also request full image variations, which offer AI-inspired ideas for the overall photo, including riffs that update the photo’s subject or product being advertised.

In one example, Meta shows how an existing ad creative showing a cup of coffee sitting outdoors next to coffee beans could be modified to present the cup, from a different angle, in front of lush greenery and coffee beans, evoking imagery reminiscent of a coffee farm.

This may not be a big deal if the image is only meant to encourage someone to visit a local coffee shop. But if it were the coffee cup itself that was for sale, then the AI variations Meta offers could depict versions of the product that don’t exist in real life.

The feature could be abused by advertisers who wanted to dupe consumers into buying products that don’t actually exist.

Meta admits this is a possible use case, saying that an advertiser could use the upcoming text prompt feature to tailor the generated output, showing their product in different colors, from different angles, and in different scenarios. Even now, the “different colors” option could be used to dupe customers into thinking a product looks different than it does in real life.

As Meta’s example demonstrates, the coffee cup itself could be transformed into different colors, or could be shown from different angles, where each cup has its own distinct swirl of foaming milk mixed in with the hot beverage.

However, Meta claims that it has strong guardrails in place to prevent its system from generating inappropriate ad content or low-quality images. This includes “pre-guardrails” to filter out images that its gen AI models don’t support and “post-guardrails” that filter out generated text and image content that doesn’t meet its quality bar or that it deems inappropriate. Plus, Meta said it stress-tested the feature using its Llama image and full ads image generation model with both internal and external experts to try to find unexpected ways it could be used, then addressed any vulnerabilities found.

Meta says this feature has already begun to roll out, and in the months ahead, advertisers will be able to provide text prompts to tailor the image’s variations, too.


Plus, Meta will now allow advertisers to add text overlays to their AI-generated images, with a dozen of the most popular typefaces to choose from.

Another feature, image expansion, also introduced in October 2023, will now be available to Reels in addition to the Feed, across both Facebook and Instagram. This option leverages AI to help advertisers adjust their image assets to fit across different aspect ratios, like Reels and Feed. The idea is that advertisers could spend less time repurposing their creative assets for different surfaces. Meta says text overlay will work along with image expansion, too.

One advertiser, smartphone case maker Casetify, said that using Meta’s GenAI Background Generation feature led to a 13% increase in return on its ad spend. The company had tested the option with its Advantage+ shopping campaigns, where the AI features first became available in the fall. The updated AI features will also be available through Ads Manager via Advantage+ creative, as before.


Beyond images, Meta’s AI can be used to generate alternate versions of the ad headline, in addition to the ad’s primary text, which was already supported by leveraging the original copy. Meta says it’s testing the ability for this text to also sound like the brand’s voice and tone, using previous campaigns as its reference material. Text generation capabilities will be moved to Meta’s next-gen LLM (large language model), Meta Llama 3.

All the generative AI features will become available globally to advertisers by the end of the year.

Outside of the AI updates, Meta also announced it would expand its subscription service, Meta Verified for businesses, to new markets including Argentina, Mexico, Chile, Peru, France, and Italy. The service began testing last year in Australia, New Zealand and Canada.

Now, Meta Verified will offer four different tiers to its subscription plan, all with the base features of a verified badge, account support, and impersonation monitoring. Higher tiers will include new tools like profile enhancements, tools for creating connections, and more ways to access customer support.

Meta Verified will be expanded to WhatsApp soon, the company also said.
