Google Faces Significant Challenges and Competition as It Considers Charging for AI Search

For years, Google has dominated the online search market, with its search engine serving as the primary tool for billions of users seeking information on the web. However, the rise of AI-powered search competitors and chatbots, such as OpenAI's ChatGPT and Perplexity AI, has begun to pose a significant threat to Google's long-standing supremacy. These emerging platforms leverage advanced natural language processing (NLP) and machine learning technologies to provide users with more sophisticated, conversational, and context-aware search experiences. As a result, Google finds itself in a position where it must adapt and innovate to maintain its competitive edge in the face of this new wave of AI-driven search disruption.

Google's Consideration of Premium AI Search Features

Faced with the growing pressure from AI search competitors, Google is reportedly exploring the possibility of introducing premium AI search features as a means to diversify its revenue streams and retain its user base. According to sources familiar with the matter, these advanced AI capabilities could potentially be integrated into Google's existing subscription services, such as Google One and Workspace, offering paying users access to more powerful and personalized search tools.

While the specifics of these premium AI search features remain unclear, it is believed that they would coexist alongside Google's core search engine, which would continue to be available for free to all users. This two-tiered approach would allow Google to cater to the varying needs and preferences of its user base, providing basic search functionality at no cost while offering more advanced AI-powered features to those willing to pay for a premium experience.

However, it is important to approach these reports with a degree of skepticism, as Google has not officially confirmed its plans to introduce premium AI search features. The company likely faces a difficult balancing act as it weighs the potential benefits of new revenue streams against the risk of alienating users who have grown accustomed to a free, accessible search experience. Moreover, the development and implementation of such features would undoubtedly require significant investments in research, infrastructure, and talent, all of which could strain Google's resources and profitability in the short term.

The Challenge of Monetizing AI Search

As Google contemplates the introduction of premium AI search features, the company faces a significant challenge in monetizing these advanced capabilities without compromising its existing revenue model. Traditionally, Google has relied heavily on advertising to generate income, with businesses paying to display their ads alongside search results. However, the incorporation of AI-powered search features could potentially disrupt this model, as users may be more likely to find the information they need directly within the search results, reducing the likelihood of clicking on ads.

Moreover, the development and deployment of AI search features require substantial computing power and resources, which could drive up operational costs for Google. The company must carefully consider how to balance the enhanced user experience provided by AI search with the financial feasibility of offering such features. Striking the right balance will be crucial to ensure that Google can sustainably deliver advanced search capabilities while maintaining its profitability in the long run.

Google's AI Search Experiments So Far

In an effort to stay ahead of the curve and explore the potential of AI-powered search, Google has already begun experimenting with various AI search features. The company has been testing AI-generated summaries that appear alongside traditional search results, providing users with concise, contextually relevant answers to their queries. These summaries aim to enhance the user experience by offering a more efficient and targeted way to access information, reducing the need to click through multiple links to find the desired content.

However, Google's AI search experiments have been limited to select user groups, as the company carefully assesses the impact of these features on user behavior and satisfaction. By gathering feedback and analyzing usage patterns, Google seeks to gain valuable insights into how AI search can be optimized to meet the evolving needs and expectations of its user base.

While these experiments represent a significant step forward in Google's AI search journey, they also highlight the challenges the company faces in balancing innovation with its existing business model. As Google continues to refine and expand its AI search capabilities, it must remain mindful of the potential trade-offs between enhanced user experience and the sustainability of its advertising-based revenue streams.

Ultimately, the success of Google's AI search initiatives will depend on its ability to strike a delicate balance between technological advancement and business viability. By carefully navigating the complexities of monetization, user experience, and competitive pressures, Google can position itself to thrive in the new era of AI-powered search while maintaining its position as a leader in the industry.

The Competitive Landscape

As Google grapples with the challenges of integrating AI into its search offerings, the competitive landscape continues to evolve at a rapid pace. One of the most notable players in this space is OpenAI's ChatGPT, which has taken the world by storm since its launch. ChatGPT's conversational interface and ability to provide detailed, context-aware responses to user queries have set a new standard for AI-powered search and raised expectations among users. The immense popularity of ChatGPT has put pressure on Google to innovate and adapt, as users increasingly seek more engaging and interactive search experiences.

Another significant competitor in the AI search arena is Perplexity AI, which differentiates itself by offering conversational search, an ad-free experience, and clear citations for the sources of its information. This approach resonates with users who value seeing multiple sources brought together and who have grown wary of the influence of advertising on traditional search engines. Perplexity AI's ability to provide comprehensive, multi-faceted answers to complex queries further distinguishes it from Google's current offerings.

As these and other AI search competitors continue to gain traction, Google must remain vigilant and proactive in its efforts to stay ahead of the curve. The company's success will depend on its ability to not only match the capabilities of its rivals but also to differentiate itself by leveraging its vast resources, expertise, and user base to deliver unique value propositions.

Implications and Outlook for Google

The potential introduction of premium AI search features by Google represents a significant shift in the company's business model and could have far-reaching implications for the search industry as a whole. By offering advanced AI capabilities as a paid service, Google is signaling a move away from its traditional reliance on advertising revenue and towards a more diversified, subscription-based model. This shift could pave the way for a new era of search, where users have greater control over their search experience and can choose between free, ad-supported services and premium, AI-powered offerings.

However, the transition to a premium AI search model is by no means easy. Google must carefully navigate user expectations, striking a balance between providing valuable, advanced features and maintaining the accessibility and affordability that have made its search engine a ubiquitous tool for billions of users worldwide. The company must also grapple with the technological complexities of implementing AI at scale, ensuring that its search offerings remain reliable, accurate, and responsive to the ever-evolving needs of its user base.

Looking ahead, the future of search is likely to be shaped by the convergence of AI, user preferences, and business imperatives. As AI technologies continue to advance and users grow increasingly accustomed to conversational, context-aware search experiences, Google and its competitors will need to continually innovate and adapt to stay relevant. The success of premium AI search offerings will depend on the ability of companies to strike the right balance between technological sophistication, user-centric design, and financial sustainability.

Ultimately, the winners in the AI search race will be those who can most effectively harness the power of artificial intelligence to deliver truly transformative search experiences while also building robust, flexible business models that can withstand the test of time. As Google embarks on this new chapter in its search journey, it will need to draw on its deep reserves of talent, resources, and innovation to maintain its leadership position and shape the future of search in the age of AI.

Top 7 Model Deployment and Serving Tools

Gone are the days when models were simply trained and left to collect dust on a shelf. Today, the real value of machine learning lies in its ability to enhance real-world applications and deliver tangible business outcomes.

However, the journey from a trained model to production is filled with challenges. Deploying models at scale, ensuring seamless integration with existing infrastructure, and maintaining high performance and reliability are just a few of the hurdles that MLOps engineers face.
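At its core, "serving" a model means putting a predict function behind a network endpoint. A minimal sketch using only the Python standard library (the linear model and its weights below are hypothetical stand-ins) shows the bare bones that the tools in this list harden for production:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(features):
    """Stand-in 'model': a hypothetical linear scorer, for illustration only."""
    weights = [0.5, -0.2, 0.1]
    return sum(w * x for w, x in zip(weights, features))


class PredictHandler(BaseHTTPRequestHandler):
    """Exposes predict() as a JSON-over-HTTP endpoint."""

    def do_POST(self):
        # Read the JSON body, score it, and return the prediction as JSON.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        features = json.loads(body)["features"]
        payload = json.dumps({"prediction": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # suppress per-request logging


def serve(port=8080):
    # Blocks forever serving requests on localhost.
    HTTPServer(("127.0.0.1", port), PredictHandler).serve_forever()
```

Everything this sketch lacks, such as autoscaling, batching, GPU placement, versioned rollouts, and monitoring, is precisely what the tools below provide.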

Thankfully, there are many powerful MLOps tools and frameworks available nowadays to simplify and streamline the process of deploying a model. In this blog post, we will learn about the top 7 model deployment and serving tools in 2024 that are revolutionizing the way machine learning (ML) models are deployed and consumed.

1. MLflow

MLflow is an open-source platform that simplifies the entire machine learning lifecycle, including deployment. It provides Python, R, Java, and REST APIs for deploying models across various environments, such as AWS SageMaker, Azure ML, and Kubernetes.

MLflow provides a comprehensive solution for managing ML projects with features such as model versioning, experiment tracking, reproducibility, model packaging, and model serving.
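For example, once a model has been logged to a run, MLflow's CLI can stand it up as a local REST endpoint; this is a sketch, with the run ID left as a placeholder:

```shell
# Serve a previously logged model as a local REST API (run ID is a placeholder)
mlflow models serve -m "runs:/<run_id>/model" --port 5000

# Score it with a JSON payload against the /invocations endpoint
curl -X POST http://127.0.0.1:5000/invocations \
  -H "Content-Type: application/json" \
  -d '{"inputs": [[1.0, 2.0, 3.0]]}'
```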

2. Ray Serve

Ray Serve is a scalable model serving library built on top of the Ray distributed computing framework. It allows you to deploy your models as microservices and handles the underlying infrastructure, making it easy to scale and update your models. Ray Serve supports a wide range of ML frameworks and provides features like response streaming, dynamic request batching, multi-node/multi-GPU serving, versioning, and rollbacks.

3. Kubeflow

Kubeflow is an open-source framework for deploying and managing machine learning workflows on Kubernetes. It provides a set of tools and components that simplify the deployment, scaling, and management of ML models. Kubeflow integrates with popular ML frameworks like TensorFlow, PyTorch, and scikit-learn, and offers features like model training and serving, experiment tracking, ML pipeline orchestration, AutoML, and hyperparameter tuning.

4. Seldon Core V2

Seldon Core is an open-source platform for deploying machine learning models that can be run locally on a laptop as well as on Kubernetes. It provides a flexible and extensible framework for serving models built with various ML frameworks.

Seldon Core can be deployed locally using Docker for testing and then scaled on Kubernetes for production. It allows users to deploy single models or multi-step pipelines and can reduce infrastructure costs. It is designed to be lightweight, scalable, and compatible with various cloud providers.

5. BentoML

BentoML is an open-source framework that simplifies the process of building, deploying, and managing machine learning models. It provides a high-level API for packaging your models into a standardized format called "bentos" and supports multiple deployment options, including AWS Lambda, Docker, and Kubernetes.

BentoML's flexibility, performance optimization, and support for various deployment options make it a valuable tool for teams looking to build reliable, scalable, and cost-efficient AI applications.
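As a sketch of the workflow (assuming a `service.py` file defining a BentoML Service named `svc`, and a hypothetical image tag), the CLI covers local serving, packaging, and containerization:

```shell
# Serve the service locally during development
bentoml serve service.py:svc --port 3000

# Package model + code into a "bento", then build a Docker image from it
bentoml build
bentoml containerize my_service:latest
```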

6. ONNX Runtime

ONNX Runtime is an open-source cross-platform inference engine for deploying models in the Open Neural Network Exchange (ONNX) format. It provides high-performance inference capabilities across various platforms and devices, including CPUs, GPUs, and AI accelerators.

ONNX Runtime supports models exported from a wide range of ML frameworks, including PyTorch, TensorFlow/Keras, TFLite, and scikit-learn. It offers optimizations for improved performance and efficiency.

7. TensorFlow Serving

TensorFlow Serving is an open-source tool for serving TensorFlow models in production. It is designed for machine learning practitioners who are familiar with the TensorFlow framework for model tracking and training. The tool is highly flexible and scalable, allowing models to be deployed as gRPC or REST APIs.

TensorFlow Serving has several features, such as model versioning, automatic model loading, and batching, which enhance performance. It seamlessly integrates with the TensorFlow ecosystem and can be deployed on various platforms, such as Kubernetes and Docker.
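A common way to try it is the official Docker image, which mounts a SavedModel directory and exposes a REST API; in this sketch, the model path and name are placeholders:

```shell
# Serve a SavedModel over REST on port 8501 (path and model name are placeholders)
docker run -p 8501:8501 \
  --mount type=bind,source=/path/to/saved_model,target=/models/my_model \
  -e MODEL_NAME=my_model tensorflow/serving

# Query the model's predict endpoint
curl -d '{"instances": [[1.0, 2.0, 3.0]]}' \
  http://localhost:8501/v1/models/my_model:predict
```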

Final Thoughts

The tools mentioned above offer a range of capabilities and can cater to different needs. Whether you prefer an end-to-end tool like MLflow or Kubeflow, or a more focused solution like BentoML or ONNX Runtime, these tools can help you streamline your model deployment process and ensure that your models are easily accessible and scalable in production.

Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master's degree in technology management and a bachelor's degree in telecommunication engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.

CognitiveLab Releases Indic LLM Leaderboard

An Indic LLM leaderboard is finally here. CognitiveLab has released its Indic LLM Leaderboard to serve the growing number of Indic language models entering the scene without a uniform evaluation framework.

The Indic LLM Leaderboard supports seven Indic languages: Hindi, Kannada, Tamil, Telugu, Malayalam, Marathi, and Gujarati, providing a comprehensive assessment platform. Hosted on Hugging Face, it initially supports four Indic benchmarks, with plans for additional benchmarks in the future.

Along with this, Adithya S Kolavi, the founder of CognitiveLab, has also unveiled the indic_eval evaluation framework, which supports ARC Easy, ARC Challenge, HellaSwag, MMLU, BoolQ, and Translation.

The leaderboard also seamlessly integrates with indic_eval, simplifying the process of uploading evaluation scores.

This entire system is deployed within India, ensuring robust security measures.

As an alpha release, the leaderboard promises ongoing enhancements and thoroughly tested features in subsequent updates. As of this release, the base models ‘meta-llama/Llama-2-7b-hf’ and ‘google/gemma-7b’ have been added to the leaderboard to serve as references.

With a commitment to bolstering its capabilities, CognitiveLab aims to establish the Indic LLM Leaderboard as a pivotal tool for evaluating and advancing Indic Language Models.

The leaderboard operates by executing indic_eval on the chosen model, then transmitting the outcomes to a server for storage in a database. The Frontend Leaderboard subsequently accesses this server to retrieve the most recent models from the database, alongside their respective benchmarks and metadata.

This project draws inspiration from the Open LLM Leaderboard but, owing to computational resource limitations, introduces a decentralized approach to standardized evaluation on common benchmarks: users conduct evaluations on their own GPUs, while the leaderboard acts as a centralized platform for model comparisons.

To ensure reliability and consistency in the output, the company employed indictrans2 from AI4Bharat and other translation APIs to translate the benchmarking dataset into seven Indian languages.

In March, CognitiveLab introduced Ambari, an open-source series of bilingual Kannada-English LLMs. The initiative addresses the challenges posed by the dynamic landscape of LLMs, with a primary focus on bridging the linguistic gap between Kannada and English.

The post CognitiveLab Releases Indic LLM Leaderboard appeared first on Analytics India Magazine.

The Rise of Chief AI Officer

The recent news of organizations appointing CAIOs has everyone talking about this role and its part in creating safeguards around AI.

While the title sounds appealing, it comes with its share of uncertainty in terms of expectations and responsibilities.

While bringing its own distinctive capabilities to the table, this role sits at the intersection of many key roles, such as business executives, legal advisors, the Chief Technology Officer (CTO), and the Chief Data Officer (CDO), among others.

Aligning AI strategy with business priorities, the CAIO also discusses the need (and plan) to modernize the existing architecture to be AI-ready, advising on and overseeing the tech-maturity level with the CTO. Additionally, this AI executive is responsible for safeguarding AI initiatives while ensuring that they are commercially viable, aligning with the CFO's office.

Put simply, singular AI leadership is essential to effectively integrate AI into business and redefine the legacy way of business.

Now that we are aware of how this role interacts with its peers, let’s see why there is a need for such a role now. Why haven’t we heard more about CAIOs before?

Why are we Discussing this Role now?

In addition to the announcement of the importance of this role from the Biden administration, the rate at which the field of AI is evolving demands an AI executive who can sense the leading indicators of where the technology is headed, develop the innovation muscle at the enterprise and execute the ever-evolving roadmap faster than the competition.

The goal is to prepare the organization to reap the benefits of AI as effectively and efficiently as possible, instead of waiting on industry leaders to signal the best use of the technology.

Supporting this belief with numbers, the title “Chief AI Officer” has been on an upward trend on LinkedIn, growing from 250 positions in 2020 to 781 in the latest search.

It won’t be wrong to say that companies investing in a CAIO are well-prepared to embrace AI technology and witness business growth in the coming years.

What does a Chief AI Officer do?

The CAIO can make everyone AI-aware, including peer executives, and contribute significantly to AI literacy programs. It is similar to creating an AI council, spearheaded by CAIO, that creates awareness and brings everyone together to take a collective call on leveraging AI to reshape their business.

One may question adding yet another executive who must partner with fellow peers to bring out synergies; however, identifying AI opportunities without assuming unwanted risk requires expertise that is materially different from running business as usual (BAU).

The CAIO brings domain knowledge that, combined with a systematic approach to adopting AI, helps everyone align on picking the right initiatives rather than going all over the place.

AI should never be done for PR campaigns or as just another box-ticking activity. It requires an executive with a laser-sharp focus on overseeing its development and adoption responsibly.

Building an AI Culture

AI is not something that happens overnight. It is a multi-year transformation journey rooted in setting the right culture and bringing everyone along. This one strategic hire can weave AI into the fabric of your whole organization.

  • Attracting AI talent is a big challenge in itself; however, the problem worsens quickly when organizations struggling with their AI roadmap begin to lose their data scientists. And rightly so: the field demands keeping up to date with the latest technology trends and staying hands-on. The moment DS teams realize that the company’s AI vision is more a fancy aspiration than a strong business proposition, they will soon move on. Hence, the presence of a CAIO solidifies the AI vision and affirms the team’s confidence that everyone is working in the right direction toward making a meaningful business impact.
  • CAIO knows the language of AI and can articulate the technical requirements to DS and Engineering teams effectively. Such communication plays a crucial role in the seamless development cycle of AI products, reducing unnecessary iterations arising due to a lack of AI expertise.
  • One of the major issues that deter the progress of AI initiatives is executive buy-in. With the introduction of CAIOs, the buy-in becomes easier as the organization already has an AI executive who can communicate the possibilities with AI.

There is much more to say about the value this role brings, which I will cover in a follow-up post; however, one thing is clear.

CAIOs are not just another fancy designation on board. They build and execute an AI strategy by deriving it from a business strategy and working in tandem with a data strategy.

Even if there are several other potential contenders like CDOs and CTOs that can extend themselves to this role, it is crucial to note that CAIOs carry the necessary expertise to perform the due diligence that AI demands and are aptly equipped to handle the unique challenges that come with AI.

Closing Remarks

Ultimately, it is not just about a title. Every organization might have a different designation that corresponds to CAIO. Their version of CAIO might be a VP, SVP, Director, or Head of AI doing the same work already.

It is more about the depth of knowledge and the ability to drive innovation and transformation in AI initiatives.

Vidhi Chugh is an AI strategist and a digital transformation leader working at the intersection of product, sciences, and engineering to build scalable machine learning systems. She is an award-winning innovation leader, an author, and an international speaker. She is on a mission to democratize machine learning and break the jargon for everyone to be a part of this transformation.

AI Crafts a Beautiful Poem in Honor of Kiran Bedi at Rising 2024

At India’s biggest diversity and inclusion summit, the Rising 2024, Kiran Bedi, the first woman to join the Indian Police Service (IPS), awed the audience.

In a fireside conversation, Megha Sinha, the vice president of Genpact’s AI/ML practice, presented a captivating AI-generated poem honouring Bedi.

It beautifully captured Bedi’s essence:

A name etched in honour, a spirit so bold,

Kiran Bedi, a story that is yet to be old.

In a world carved for men, a woman, she stood,

For justice and fairness of force for the good.

With eyes filled with kindness, a heart full of might,

She shattered the ceiling glass.

A beacon so bright from streets calmed to reforms that took root,

Her legacy blossoms, a life-bearing fruit.

So here’s to the woman who broke every chain,

An inspiration for all sunshine or rain.

May her courage ignite us and her integrity bind,

Leaving footprints of hope for all humankind.

During her address, Bedi emphasised the importance of women prioritising their careers and having a purpose-driven approach. “That was the key for me. I grew up making my career self-reliant, self-sufficient, and made it right to the top. My first priority was my career. Nothing else mattered to me,” she asserted.

Bedi also shed light on the challenges she faced as a woman in a male-dominated field, particularly in the Indian Police Service, where women were underrepresented and often faced discrimination. Despite these obstacles, she never questioned her own abilities or let the biases of others deter her from pursuing her goals. “I never questioned my ability, till today. I learned to value myself right from my childhood, and I grew up so confidently taking on whatever came my way,” she said.

The former IPS officer stressed the importance of family support in enabling women to reach the top echelons of their careers. Recounting her own experience, Bedi said, “My mother and father left Amritsar and moved to Delhi when my daughter was ill because they thought this is what we brought Kiran up for.” She said she didn’t lag behind or have to make difficult choices between her career and family because of the support she got from her parents.

Bedi also highlighted the need for women to plan and manage their personal lives effectively to succeed in their careers. “Motherhood must be planned and if you are married and you’ve got families with you, please welcome them. Please find a place for them and give them all that will make them happy so that they stay at home with you,” she advised.

When asked about the low representation of women in top management positions, Bedi attributed it to the lack of prioritising careers and insufficient family support. She noted, “We don’t make it to the boardroom because we don’t get the kind of family support we do.” It’s all about management, she said. “We become mothers without planning for management. Every child is a project. And do you know what even husbands are projects! They’re all projects,” she remarked.

Bedi’s message to the audience was clear – women must prioritise their careers, seek family support, and manage their personal lives effectively to break the glass ceiling. She encouraged women to recharge their energy and focus on their goals, stating, “If you have to be totally career conscious, prioritise that in all possible ways while balancing all others and providing for. That’s my key. And that was the reason I made it to the top.”

Kiran Bedi’s inspiring presence at Rising 2024 and her powerful words left an indelible mark on the audience. Her journey serves as a beacon of hope for women aspiring to make it to the top in their chosen fields, reminding them that with determination, support, and effective management, no goal is unattainable.

The post AI Crafts a Beautiful Poem in Honor of Kiran Bedi at Rising 2024 appeared first on Analytics India Magazine.

Big Tech companies form new consortium to allay fears of AI job takeovers

By Kyle Wiggers

AI might not be coming for all jobs, but it might be coming for some.

UPS’s largest layoff in its 116-year history was the result of, in part, new technologies, including AI, CEO Carol Tomé said during an earnings call in February. Meanwhile, IBM plans to pause hiring for roles it thinks could soon be automated by AI, CEO Arvind Krishna told Bloomberg last year.

Workers aren’t optimistic about the future. In a recent survey from McKinsey, 25% of business professionals said that they expect their employer to lay off staff as a result of AI adoption. And, well, their pessimism isn’t misplaced. According to one estimate, around 4,000 workers have lost their jobs to AI since May. And in a poll from Beautiful.ai, which makes AI-powered presentation software, nearly half of managers said that they’re hoping to replace workers with AI.

But a cohort of Big Tech vendors and consultancies — called the AI-Enabled ICT Workforce Consortium (ITC) — aims to push back against the notion that AI will lead to job losses, citing the need for re-skilling and upskilling within the information and communication technology (ICT) industry specifically.

The ITC is being led by Cisco with support from Google, Microsoft, IBM (conspicuously), Intel, SAP and Accenture. The ITC’s mandate is to explore AI’s impact on jobs while enabling people to find AI-related training programs and connecting businesses to “skilled and job-ready” workers, a spokesperson told TechCrunch in a briefing.

“The ITC’s unique approach will research and evaluate the impact of AI on specific job roles, including skills and tasks, and recommend training for an AI-enabled ICT workforce,” the spokesperson said. “Consortium members and advisers share a common perspective that a greater sense of urgency is required to understand the impact of AI on key job roles within the ICT Industry.”

In the first phase of its work, the ITC will evaluate the impact of AI on 56 ICT job roles and provide training recommendations for the roles affected. These 56 roles, which the ITC hasn’t disclosed yet, were selected for their “strategic significance” in the broader ICT ecosystem and AI’s impact on the tasks required to perform the roles, the spokesperson said, as well as roles that offer “promising entry points” for low-level workers.

“These job roles include 80% of the top 45 ICT job titles garnering the highest volume of job postings for the period February 2023–2024 in the U.S. and five of the largest European countries by ICT workforce numbers (France, Germany, Italy, Spain and the Netherlands),” the spokesperson said. “Collectively, these countries account for a significant segment of the ICT sector, with a combined total of 10 million ICT workers.”

The ITC intends to publish its findings in a report this summer. And, beyond that, it hasn’t quite figured out a roadmap.

“The Consortium will determine its ‘phase 2’ scope in mid-2024,” the spokesperson said. “As we progress towards phase 2, the Consortium may consider extending invitations to other organizations and institutions to join our collaborative efforts in supporting the success of an AI-enabled ICT workforce.”

And therein lies the problem with industry consortiums like this.

If the goal is to allay fears of AI threatening livelihoods en masse, tech incumbents will need to deliver a lot more than vague promises and reports. IBM has pledged to skill 2 million people in AI by 2030; Intel has said it’ll upskill over 30 million with AI in the same timeframe.

“Consortium members have established forward-thinking goals with skills development and training programs to positively impact over 95 million individuals around the world over the next 10 years,” the spokesperson said.

Yet it’s not clear how many AI roles will be available then.

According to a recent analysis by Lightcast, a labor market analytics firm, the demand for AI roles is decreasing, not increasing. In 2022, AI-related positions made up 2% of all job postings in the U.S. In 2023, that figure dipped to 1.6%.

“Consortium members commit to developing worker pathways particularly in job sectors that will increasingly integrate artificial intelligence technology,” the spokesperson said. “It’s a voluntary and transparent effort across companies to assess the impact and identify paths for upskilling and reskilling of technology roles most likely to be impacted by AI … We intend for this work to produce real, tangible recommendations that will address business and worker needs.”

I’ll reserve some judgment until we see those “real, tangible” recommendations. But I’d hope that, whatever form they take, they’re accompanied by courses of action — or any action, really. Big Tech has big promises to keep, particularly where it concerns the future of work and the tech industry’s role in shaping it.

AI to Impact 1.1 Bn Roles for People Across Culture, Genders and Abilities 

At the keynote of The Rising 2024, India’s biggest diversity and inclusion conference, Vidya Rao, chief technology and transformation officer at Genpact, said that billions of roles are prone to radical transformation because of AI.

Rao said, “This seismic shift in the job market isn’t confined to a single demographic; rather, it encompasses a wide spectrum of individuals from diverse cultures, genders, and abilities.”

Further, she explained that such a transformative landscape presents a plethora of opportunities, necessitating an increasingly varied and multifaceted workforce to meet these evolving challenges. Rao believes that embracing this diversity is not just a checkbox for organisations but a strategic imperative to drive meaningful change and progress.

Genpact has invested over $600 million in generative AI initiatives and is working on hundreds of GenAI PoCs across use cases. “AI is going to invade everything that we do,” said Rao, adding that technology is in the flow of life and that one cannot imagine a world without it.

Rao said that there is a clear understanding that embracing AI will drive meaningful change. “And have you arrived? I don’t think so. There is a very, very long way to go,” she added, saying that just having this agenda as one of the foundations to run your company will bring huge change.

The 9 Tenets of Diversity, Equity and Inclusion

Rao’s keynote addressed nine tenets for cultivating an innovative, inclusive and adaptable tech environment: acknowledging and overcoming biases; personally committing to diversity, equity and inclusion (DE&I); continuously upskilling; being courageous in communication; building supportive relationships; fostering inclusive communication; being an ally and advocate; engaging in mentorship and partnerships; and celebrating diversity.

With over 30 years of experience, Rao currently leads Genpact’s internal digital transformation processes. Her role also includes driving the global ERP programme and collaborating within Genpact’s partner ecosystem to support transformation at scale. Prior to Genpact, she worked with Accenture, performing diverse roles and leading technology delivery for multinational clients.

As a coach for budding women leaders, Rao takes great pride in mentoring the next generation of professionals. She has a degree in science, majoring in Statistics, from Mumbai University, India.

Equality vs Equity

Rao shared the importance of recognising and addressing unconscious biases that can prevent equal opportunities. She said true equity goes beyond treating everyone equally; it means ensuring everyone has the support and resources they need to succeed.

“Diversity sparks innovation, inclusion fosters belonging, equality ensures opportunities, and equity reduces barriers,” said Rao, succinctly capturing the essence of DEI (Diversity, Equity, and Inclusion) in the tech world. She outlined how each element contributes to a more dynamic and equitable working environment.

Sharing real-life anecdotes, Rao stressed the importance of creating a work environment where employees feel safe expressing themselves and taking risks. This will lead to constructive behaviours and business growth.

“Fostering an environment that can encourage courageous behaviours is very important because by providing an environment with psychological safety, you will see that people feel included and truly come up with constructive behaviours that will flourish their own businesses.”

Building Strong Relationships and Investing in Support Systems

“You can have the best support system, but it will not work if you do not invest time in building that relationship—be it your parents, in-laws, family, or even your support staff, maids, drivers, etc.,” said Rao. She said that one should invest in relationships; not just as a transaction, but as an integral part of social responsibility and human upliftment.

She believes that all successful people have a fantastic support system behind them. “It is not easy to be successful if you do not have a system, and that comes only by investing time and building relationships,” said Rao.

Further, Rao advocated for diversity at all levels, urging disruption of the status quo to create a more inclusive world for future generations. “We need diverse talent, leadership, and workforce. We have to challenge, disrupt the status quo, and create a world that is more inclusive for the coming generations,” she added.

In concluding remarks, Rao highlighted the importance of women supporting each other professionally, fostering collaboration instead of competition. “Women helping women is very important in this agenda. It’s not just about competition but about raising each other up,” she concluded.

The post AI to Impact 1.1 Bn Roles for People Across Culture, Genders and Abilities appeared first on Analytics India Magazine.

India, grappling with election misinfo, weighs up labels and its own AI safety coalition


An Adobe-backed association wants to help organizations in the country with an AI standard

Jagmeet Singh

India, with a long history of tech being co-opted to persuade the public, has become a global hot spot for how AI is being used, and abused, in political discourse, and specifically in the democratic process. Tech companies, which built the tools in the first place, are making trips to the country to push solutions.

Earlier this year, Andy Parsons, a senior director at Adobe who oversees its involvement in the cross-industry Content Authenticity Initiative (CAI), stepped into the whirlpool when he made a trip to India to visit with media and tech organizations in the country to promote tools that can be integrated into content workflows to identify and flag AI content.

“Instead of detecting what’s fake or manipulated, we as a society, and this is an international concern, should start to declare authenticity, meaning saying if something is generated by AI that should be known to consumers,” he said in an interview.

Parsons added that some Indian companies — currently not part of a Munich AI election safety accord signed by OpenAI, Adobe, Google and Amazon in February — intended to construct a similar alliance in the country.

“Legislation is a very tricky thing. To assume that the government will legislate correctly and rapidly enough in any jurisdiction is something hard to rely on. It’s better for the government to take a very steady approach and take its time,” he said.

Detection tools are famously inconsistent, but they are a start in fixing some of the problems, or so the argument goes.

“The concept is already well understood,” he said during his Delhi trip. “I’m helping raise awareness that the tools are also ready. It’s not just an idea. This is something that’s already deployed.”


Andy Parsons, senior director at Adobe. Image Credits: Adobe

The CAI — which promotes royalty-free, open standards for identifying whether digital content was generated by a machine or a human — predates the current hype around generative AI: It was founded in 2019 and now has 2,500 members, including Microsoft, Meta, Google, The New York Times, The Wall Street Journal and the BBC.

Just as there is an industry growing around the business of leveraging AI to create media, there is a smaller one being created to try to course-correct some of the more nefarious applications of that.

So in February 2021, Adobe went a step further and co-founded the Coalition for Content Provenance and Authenticity (C2PA) with ARM, the BBC, Intel, Microsoft and Truepic. The coalition aims to develop an open standard that taps the metadata of images, videos, text and other media to highlight their provenance: telling people about a file’s origins, the location and time of its generation, and whether it was altered before it reached the user. The CAI works with the C2PA to promote the standard and make it available to the masses.

Now it is actively engaging with governments like India’s to widen the adoption of that standard to highlight the provenance of AI content and participate with authorities in developing guidelines for AI’s advancement.

Adobe has nothing but also everything to lose by playing an active role in this game. It’s not — yet — acquiring or building large language models (LLMs) of its own, but as the home of apps like Photoshop and Lightroom, it’s the market leader in tools for the creative community, and so not only is it building new products like Firefly to generate AI content natively, but it is also infusing legacy products with AI. If the market develops as some believe it will, AI will be a must-have in the mix if Adobe wants to stay on top. If regulators (or common sense) have their way, Adobe’s future may well be contingent on how successful it is in making sure what it sells does not contribute to the mess.

The bigger picture in India in any case is indeed a mess.

Google focused on India as a test bed for how it will bar use of its generative AI tool Gemini when it comes to election content; parties are weaponizing AI to create memes with likenesses of opponents; Meta has set up a deepfake “helpline” for WhatsApp, such is the popularity of the messaging platform in spreading AI-powered missives; and at a time when countries are sounding increasingly alarmed about AI safety and what they have to do to ensure it, we’ll have to see what the impact will be of India’s government deciding in March to relax rules on how new AI models are built, tested and deployed. It’s certainly meant to spur more AI activity, at any rate.

Using its open standard, the C2PA has developed a digital nutrition label for content called Content Credentials. The CAI members are working to deploy the digital watermark on their content to let users know its origin and whether it is AI-generated. Adobe has Content Credentials across its creative tools, including Photoshop and Lightroom. It also automatically attaches to AI content generated by Adobe’s AI model Firefly. Last year, Leica launched its camera with Content Credentials built in, and Microsoft added Content Credentials to all AI-generated images created using Bing Image Creator.

Content Credentials on an AI-generated image

Image Credits: Content Credentials

Parsons told TechCrunch the CAI is talking with global governments on two areas: one is to help promote the standard as an international standard, and the other is to adopt it.

“In an election year, it’s especially critical for candidates, parties, incumbent offices and administrations who release material to the media and to the public all the time to make sure that it is knowable that if something is released from PM [Narendra] Modi’s office, it is actually from PM Modi’s office. There have been many incidents where that’s not the case. So, understanding that something is truly authentic for consumers, fact-checkers, platforms and intermediaries is very important,” he said.

India’s large population and vast linguistic and demographic diversity make it challenging to curb misinformation, he added, a vote in favor of simple labels to cut through that.

“That’s a little ‘CR’ … it’s two western letters like most Adobe tools, but this indicates there’s more context to be shown,” he said.

Controversy continues to surround what the real point might be behind tech companies supporting any kind of AI safety measure: Is it really about existential concern, or just having a seat at the table to give the impression of existential concern, all the while making sure their interests get safeguarded in the process of rule making?

“It’s generally not controversial with the companies who are involved, and all the companies who signed the recent Munich accord, including Adobe, who came together, dropped competitive pressures because these ideas are something that we all need to do,” he said in defense of the work.

OpenAI expands its custom model training program

Kyle Wiggers

OpenAI is expanding a program, Custom Model, to help enterprise customers develop tailored generative AI models using its technology for specific use cases, domains and applications.

Custom Model launched last year at OpenAI’s inaugural developer conference, DevDay, offering companies an opportunity to work with a group of dedicated OpenAI researchers to train and optimize models for specific domains. “Dozens” of customers have enrolled in Custom Model since. But OpenAI says that, in working with this initial crop of users, it’s come to realize the need to grow the program to further “maximize performance.”

Hence assisted fine-tuning and custom-trained models.

Assisted fine-tuning, a new component of the Custom Model program, leverages techniques beyond fine-tuning — such as “additional hyperparameters and various parameter efficient fine-tuning methods at a larger scale,” in OpenAI’s words — to enable organizations to set up data training pipelines, evaluation systems and other supporting infrastructure toward bolstering model performance on particular tasks.
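OpenAI doesn’t specify which parameter-efficient fine-tuning (PEFT) methods it uses. As a hedged illustration of the general idea, the sketch below shows LoRA-style low-rank adapters, one common PEFT technique: instead of updating a full weight matrix, you train two small low-rank factors, cutting the number of trainable parameters dramatically. All dimensions here are illustrative, not OpenAI’s.

```python
import numpy as np

# Illustrative LoRA-style adapter: a d x d weight matrix W is frozen,
# and only two low-rank factors A (d x r) and B (r x d) are trained.
d, r = 1024, 8

full_params = d * d            # trainable params under full fine-tuning
lora_params = d * r + r * d    # trainable params with rank-r adapters

# The effective (adapted) weight is W + A @ B. B starts at zero so the
# adapter initially leaves the base model's behavior unchanged.
W = np.zeros((d, d))
A = np.random.randn(d, r) * 0.01
B = np.zeros((r, d))
W_adapted = W + A @ B

print(full_params)                                 # 1048576
print(lora_params)                                 # 16384
print(round(lora_params / full_params * 100, 2))   # 1.56
```

At rank 8, the adapters train roughly 1.6% of the parameters of full fine-tuning, which is why such methods are attractive "at a larger scale."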

As for custom-trained models, they’re custom models built with OpenAI — using OpenAI’s base models and tools (e.g. GPT-4) — for customers that “need to more deeply fine-tune their models” or “imbue new, domain-specific knowledge,” OpenAI says.

OpenAI gives the example of SK Telecom, the Korean telecommunications giant, which worked with OpenAI to fine-tune GPT-4 to improve its performance in “telecom-related conversations” in Korean. Another customer, Harvey — which is building AI-powered legal tools with support from the OpenAI Startup Fund, OpenAI’s AI-focused venture arm — teamed up with OpenAI to create a custom model for case law that incorporated hundreds of millions of words of legal text and feedback from licensed expert attorneys.

“We believe that in the future, the vast majority of organizations will develop customized models that are personalized to their industry, business, or use case,” OpenAI writes in a blog post. “With a variety of techniques available to build a custom model, organizations of all sizes can develop personalized models to realize more meaningful, specific impact from their AI implementations.”

OpenAI custom models

Image Credits: OpenAI

OpenAI is flying high, reportedly nearing an astounding $2 billion in annualized revenue. But there’s surely internal pressure to maintain pace, particularly as the company plots a $100 billion data center co-developed with Microsoft (if reports are to be believed). The cost of training and serving flagship generative AI models isn’t coming down anytime soon after all, and consulting work like custom model training might just be the thing to keep revenue growing while OpenAI plots its next moves.

Fine-tuned and custom models could also lessen the strain on OpenAI’s model serving infrastructure. Tailored models are in many cases smaller and more performant than their general-purpose counterparts, and — as the demand for generative AI reaches a fever pitch — no doubt present an attractive solution for a historically compute-capacity-challenged OpenAI.

Alongside the expanded Custom Model program and custom model building, OpenAI today unveiled new model fine-tuning features for developers working with GPT-3.5, including a new dashboard for comparing model quality and performance, support for integrations with third-party platforms (starting with the AI developer platform Weights & Biases) and enhancements to tooling. Mum’s the word on fine-tuning for GPT-4, however, which launched in early access during DevDay.
