Some Generative AI Company Employees Pen Letter Wanting ‘Right to Warn’ About Risks

Some current and former employees of OpenAI, Google DeepMind and Anthropic published a letter on June 4 asking for whistleblower protections, more open dialogue about risks and “a culture of open criticism” in the major generative AI companies.

The Right to Warn letter illuminates some of the inner workings of the few high-profile companies that sit in the generative AI spotlight. OpenAI holds a distinct status as a capped-profit company governed by a nonprofit, trying to “navigate massive risks” of theoretical “general” AI.

For businesses, the letter arrives amid increasing pressure to adopt generative AI tools; it also reminds technology decision-makers of the importance of strong policies around AI use.

Right to Warn letter asks frontier AI companies not to retaliate against whistleblowers and more

The demands are:

  1. For advanced AI companies not to enforce agreements that prevent “disparagement” of those companies.
  2. Creation of an anonymous, approved path for employees to express concerns about risk to the companies, regulators or independent organizations.
  3. Support for “a culture of open criticism” in regards to risk, with allowances for trade secrets.
  4. An end to whistleblower retaliation.

The letter comes about two weeks after an internal shuffle at OpenAI revealed restrictive nondisclosure agreements for departing employees. Allegedly, breaking the nondisclosure and non-disparagement agreement could forfeit employees’ rights to their vested equity in the company, which could far outweigh their salaries. On May 18, OpenAI CEO Sam Altman said on X that he was “embarrassed” by the provision allowing the company to withdraw employees’ vested equity and that the agreement would be changed.

Of the OpenAI employees who signed the Right to Warn letter, all current workers contributed anonymously.

What potential dangers of generative AI does the letter address?

The open letter addresses potential dangers from generative AI, naming risks that “range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

OpenAI’s stated purpose has, since its inception, been to both create and safeguard artificial general intelligence, sometimes called general AI. AGI refers to theoretical AI that is smarter or more capable than humans, a definition that conjures up science-fiction images of murderous machines and humans as second-class citizens. Some critics of AI call these fears a distraction from more pressing concerns at the intersection of technology and culture, such as the theft of creative work. The letter writers mention both existential and social threats.

How might caution from inside the tech industry affect what AI tools are available to enterprises?

Companies that are not frontier AI companies but are deciding how to move forward with generative AI could take this letter as a prompt to revisit their AI usage policies, their security and reliability vetting of AI products, and their data provenance practices when using generative AI.

SEE: Organizations should carefully consider an AI ethics policy customized to their business goals.

Juliette Powell, co-author of “The AI Dilemma” and a New York University professor on the ethics of artificial intelligence and machine learning, has studied the results of employee protests against corporate practices for years.

“Open letters of caution from employees alone don’t amount to much without the support of the public, who have a few more mechanisms of power when combined with those of the press,” she said in an email to TechRepublic. For example, Powell said, writing op-eds, putting public pressure on companies’ boards or withholding investments in frontier AI companies might be more effective than signing an open letter.

Powell pointed to last year’s call for a six-month pause on AI development as another example of a letter of this type.

“I think the chance of big tech agreeing to the terms of these letters – AND ENFORCING THEM – are about as probable as computer and systems engineers being held accountable for what they built in the way that a structural engineer, a mechanical engineer or an electrical engineer would be,” Powell said. “Thus, I don’t see a letter like this affecting the availability or use of AI tools for business/enterprise.”

OpenAI has always paired its pursuit of ever more capable generative AI with an acknowledgment of risk, so this letter may arrive at a time when many businesses have already weighed the pros and cons of generative AI products for themselves. Conversations within organizations about AI usage policies could embrace the “culture of open criticism” the letter calls for. Business leaders could consider enforcing protections for employees who discuss potential risks, or investing only in AI products they find to have a responsible ecosystem of social, ethical and data governance.

How to sign up for Google Labs — and 5 reasons why you should

When you think of Google's artificial intelligence (AI) technology, you might only think of its chatbot Gemini. However, the company has many other generative AI experiments that can help you enhance your workflow, generate music, organize your documents, and more. Although Google hasn't officially released many of these projects, you can try them out via Google Labs, the company's platform for testing ideas and products.

One of the perks of trying the latest Google Labs experiments is that they are constantly updated with new features, offering unique and helpful experiences for users. For example, NotebookLM just got some major upgrades that make the AI-powered note-taking, research, and writing assistant even more useful, and you can access those features for free via Labs.

Also: I've tested dozens of AI chatbots since ChatGPT's debut. Here's my new top pick

Another major Labs perk is that users can provide feedback, ultimately impacting whether the experiments are deployed and what changes are made before they are released. Keep reading to learn why you should join Google Labs and how to access each experiment.

1. AI Overviews in Search

When Google unveiled the Search Generative Experience (SGE), its AI-infused version of Google Search, you had to opt into Search Labs to get access. By opting in, you gained access to AI insights at the top of your search results page that summarize the information Google expects will satisfy your query.

Also: Google's AI Overviews appear on 70% fewer Search results pages now

At Google I/O, the company announced that AI Overviews, supported by a new Gemini model customized for Google Search, are now available to everyone in the US. Even though the feature is rolling out broadly, you will get priority access if you opt into AI Overviews in Labs. So, if this feature interests you, you should sign up.

2. NotebookLM

Last summer, Google launched NotebookLM, its "AI-first Notebook", which works with the content you input to summarize, explain, and provide key topics and questions you can ask to understand the material better.

Also: How to use Google's AI-powered NotebookLM to organize your research

You can insert a Google Doc, a PDF, Google Slides, and URLs and then ask questions about the content or have NotebookLM auto-generate content from your inputs. This feature can be helpful if you are a student. You can input all your class notes and materials into NotebookLM and the tech will help you stay organized and add AI assistance to your notes.

NotebookLM can generate study guides, briefing docs, FAQs, summaries, and more. The tech can chat with you about the content and answer any questions. To test the features out, I inserted a PDF of one of my articles, and, within seconds, NotebookLM provided an accurate AI-generated summary.

3. MusicFX

You no longer need musical expertise to generate songs. Now you can use AI to create tunes with MusicFX.

Also: ElevenLabs' AI sound effect generator has finally launched. Listen for yourself

All you need to do is type in a prompt of what you'd like to hear, and then, within seconds, your track will be available for your listening pleasure. You can even download or share your masterpiece. Be aware that MusicFX is more of a fun, experimental tool than one that will increase your productivity.

4. Illuminate

Research papers tend to be long and use a lot of technical jargon that can be difficult to understand. Illuminate is a new Google Labs experiment that aims to help you break down research papers into short audio conversations.

Also: 6 ways Apple can leapfrog OpenAI, Microsoft, and Google at WWDC 2024

Illuminate uses AI to adjust the content to your learning preferences so that you can understand the material. You can access the waitlist and learn about Illuminate by visiting Google Labs.

5. Submitting your generative AI experiment

If you are a developer working on personal AI projects, you can share them with Google for a chance to be featured in the tech giant's experiment gallery. Google says: "We're looking for projects that push the boundaries of what code can do. Projects with unique visual aesthetics. Projects that help inspire other coders." The page on Labs about the opportunity includes different criteria and a form to submit your experiment's details.

FAQs

How do you join Google Labs?

If you want to try any of these or future experiments, sign up for Google Labs. All you have to do is visit the Google Labs homepage and click on the experiment you want to try.

Also: I was a Copilot diehard until ChatGPT added these 5 features

Depending on the experiment, the sign-up process may vary. Generally, you will be prompted to sign in to your personal Google account or create a new one. Remember to use a personal account, as workplace accounts can block experimental features.

Other experiments have specific instructions for early users. If the experiment you signed up for has a waitlist, keep a close eye on your email, as you will be notified when you clear the waitlist.

C5i Leverages GenAI-Based Solutions in the Life Sciences Industry

Generative AI stands to reshape several decision processes in the life sciences field. GenAI-based solutions can aid in numerous tasks, from analysing vast amounts of structured data to deliver meaningful insights to improving the ability to digest and search diverse unstructured data, streamlining decision-making workflows.

Here are the top areas where C5i is empowering our customers through GenAI-based solutions.

AI-Powered MLR Review for Content Creation

Promotional materials created for healthcare professionals (HCPs) and patients are complex and require a significant amount of vetting and review. GenAI can help automate the review of essential documents, summarising key issues. This frees up professionals working in the medical, legal and regulatory fields to focus on higher-level tasks.

Optimised Sales Force Enablement

GenAI can enhance actionable insights for sales reps and help craft impactful presentations. By analysing diverse data sources and producing customised insights, visualisations and analyses, it enables optimal conversation paths based on HCP profiles and past interactions, maximising efficiency and message relevance.

Holistic Brand Performance Analysis

A pharmaceutical brand’s performance depends on a multi-dimensional view cutting across customer engagement, market access enablement, patient support, competition analysis and more. These insights live in structured data, both internal and syndicated, and in unstructured data, including internal and market research data. GenAI can give brands a comprehensive view of performance, with prioritised consumer insights pushed to brand managers.

Patient Engagement

GenAI empowers patient engagement by personalising interactions and interventions. Advanced algorithms analyse patient data, generating tailored recommendations and support strategies. This approach enhances communication between patients and healthcare providers, promoting adherence to treatment plans and preventive care.

Ethical considerations are crucial when adopting GenAI, where ensuring integrity, trust, and fairness in data-driven decision-making is a priority. By adhering to ethical guidelines, the industry can foster responsible innovation, ensuring that generative AI solutions contribute positively to healthcare delivery and patient outcomes.

About C5i (Course5 Intelligence Limited)

C5i is a pure-play AI & Analytics provider that combines the power of human perspective with AI technology to deliver trustworthy intelligence. The company drives value through a comprehensive solution set, integrating multifunctional teams that have technical and business domain expertise with a robust suite of products, solutions, and accelerators tailored for various horizontal and industry-specific use cases. At the core, C5i’s focus is to deliver business impact at speed and scale by driving adoption of AI-assisted decision-making.

C5i caters to some of the world’s largest enterprises, including many Fortune 500 companies. The company’s clients span Technology, Media, and Telecom (TMT), Pharma & Lifesciences, CPG, Retail, Banking, and other sectors. C5i has been recognized by leading industry analysts like Gartner and Forrester for its Analytics and AI capabilities and proprietary AI-based platforms.

The post C5i Leverages GenAI-Based Solutions in the Life Sciences Industry appeared first on AIM.

10 Essential DevOps Tools Every Beginner Should Learn

Image by Author | ChatGPT & Canva

DevOps (Development Operations) and MLOps (Machine Learning Operations) are closely related disciplines that share a wide variety of tools. As a DevOps engineer, you will deploy, maintain, and monitor applications, whereas as an MLOps engineer, you will deploy, manage, and monitor machine learning models in production. So, it is beneficial to learn DevOps tools, as doing so opens a wide array of job opportunities for you. DevOps refers to a set of practices and tools designed to increase a company's ability to deliver applications and services faster and more efficiently than traditional software development processes.

In this blog, you will learn about essential and popular tools for versioning, CI/CD, testing, automation, containerization, workflow orchestration, cloud, IT management, and monitoring applications in production.

1. Git

Git is the backbone of modern software development. It is a distributed version control tool that allows multiple developers to work on the same codebase without interfering with each other. Understanding Git is fundamental if you are getting started with software development.

Learn about 14 Essential Git Commands for versioning and collaborating on data science projects.
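
The day-to-day loop is easy to sketch in a few commands (the repository path, file, identity, and branch name below are all illustrative):

```shell
# Create a throwaway repository, commit a file, and branch off: the core Git loop
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git config user.email "dev@example.com"   # local identity for this repo, illustrative
git config user.name "Demo Dev"
echo "# Demo" > README.md
git add README.md
git commit -q -m "Initial commit"
git switch -q -c feature/add-docs         # branch off without disturbing the default branch
git log --oneline                         # shows the single commit
```

Each developer works on their own branch and merges back when ready, which is what keeps multiple people from interfering with each other's work.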

2. GitHub Actions

GitHub Actions simplifies the automation of your software workflows, enabling you to build, test, and deploy your code directly from GitHub with just a few lines of code. As a core function of DevOps engineering, mastering Continuous Integration and Continuous Delivery (CI/CD) is crucial for success in the field. By learning to automate workflows, generate logs, and troubleshoot issues, you will significantly enhance your job prospects.

Remember, in operations-related careers, it is all about experience and portfolio.

Learn how to automate machine learning training and evaluation by following GitHub Actions For Machine Learning Beginners.
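
A minimal workflow sketch shows the shape of the "few lines of code" involved. This assumes a hypothetical Python project with a `requirements.txt` and a pytest suite; the file path and step names are illustrative:

```yaml
# .github/workflows/ci.yml: run the test suite on every push and pull request
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4          # fetch the repository
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest                        # fail the job if any test fails
```

Committing this file is all it takes; GitHub runs the job automatically and surfaces logs for troubleshooting.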

3. Selenium

Selenium is a powerful tool primarily used for automating web browser interactions, allowing you to efficiently test your web application. With just a few lines of code, you can harness the power of Selenium to control a web browser, simulate user interactions, and perform automated testing on your web application, ensuring its functionality, reliability, and performance.

4. Linux

Since many servers use Linux, understanding this operating system can be crucial. Linux commands and scripts form the foundation of many operations in the DevOps world, from basic file manipulation to automating the entire workflow. In fact, many seasoned developers rely heavily on Linux scripting, particularly Bash, to develop custom solutions for data loading, manipulation, automation, logging, and numerous other tasks.

Learn about the most commonly used Linux commands by checking out the Linux for Data Science cheat sheet.
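
A few lines of Bash illustrate the kind of log triage this enables (the log directory and contents below are fabricated for the example):

```shell
# Write a sample log file, then count and extract ERROR lines: typical log triage
logdir="$(mktemp -d)"
printf 'INFO started\nERROR disk full\nINFO ok\nERROR timeout\n' > "$logdir/app.log"

grep -c '^ERROR' "$logdir/app.log"                             # prints 2
grep '^ERROR' "$logdir/app.log" | awk '{print $2}' | sort -u   # second word of each ERROR line
```

Chaining small commands like `grep`, `awk`, and `sort` in pipelines is the foundation that larger automation scripts build on.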

5. Cloud Platforms

Familiarity with Cloud Platforms like AWS, Azure, or Google Cloud Platform is essential for landing a job in the industry. The majority of services and applications that we use every day are deployed on the Cloud.

Cloud platforms offer services that can help you deploy, manage, and scale applications. By gaining expertise in Cloud platforms, you'll be able to harness the power of scalability, flexibility, and cost-effectiveness, making you a highly sought-after professional in the job market.

Start the Beginner’s Guide to Cloud Computing and learn how cloud computing works, top cloud platforms, and applications.

6. Docker

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package.

Learn more about Docker by following the Docker Tutorial for Data Scientists.
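
The packaging idea fits in a few lines. Here is a minimal Dockerfile sketch for a hypothetical Python app (`app.py` and `requirements.txt` are assumed to exist in the project):

```dockerfile
# Package a Python app and its dependencies into a single image
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Building with `docker build -t demo-app .` and running with `docker run demo-app` produces the same environment on any machine with Docker installed.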

7. Kubernetes

Kubernetes is a powerful container orchestration tool that automates the deployment, scaling, and management of containers across diverse environments. As a DevOps engineer, mastering Kubernetes is essential to efficiently scale, distribute, and manage containerized applications, ensuring high availability, reliability, and performance.

Read Kubernetes In Action: Second Edition book to learn about the essential tool for anyone deploying and managing cloud-native applications.
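
A minimal Deployment manifest shows the declarative style Kubernetes uses (the app name, image reference, and port here are hypothetical):

```yaml
# A Deployment that keeps three replicas of a containerized app running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3                 # Kubernetes restarts pods to maintain this count
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: registry.example.com/demo-app:1.0   # illustrative image reference
          ports:
            - containerPort: 8080
```

Applying this with `kubectl apply -f deployment.yaml` tells the cluster the desired state; Kubernetes handles the scheduling, restarts, and scaling.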

8. Prometheus

Prometheus is an open-source monitoring and alerting toolkit originally built at SoundCloud. It enables you to monitor a wide range of metrics and receive alerts in real time, providing unparalleled insights into your system's performance and health. By learning Prometheus, you will be able to identify issues quickly, optimize system efficiency, and ensure high uptime and availability.
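
Prometheus is configured with a short YAML file listing the endpoints to scrape for metrics. A minimal `prometheus.yml` sketch (the job name and target address are hypothetical):

```yaml
# prometheus.yml: scrape a hypothetical app's /metrics endpoint every 15 seconds
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: demo-app
    static_configs:
      - targets: ["demo-app:8080"]   # illustrative host:port exposing metrics
```

From there, the collected metrics can be queried with PromQL and wired into alerting rules.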

9. Terraform

Terraform, an open-source infrastructure as code (IaC) tool developed by HashiCorp, enables you to provision, manage, and version infrastructure resources across multiple cloud and on-premises environments with ease and precision. It supports a wide range of existing service providers, as well as custom in-house solutions, allowing you to create, modify, and track infrastructure changes safely, efficiently, and consistently.
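
A minimal sketch of Terraform's HCL syntax illustrates the infrastructure-as-code idea (the region is illustrative and the AMI ID is a placeholder, not a real value):

```hcl
# Declare a single cloud server as code; "terraform apply" makes reality match
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"   # illustrative region
}

resource "aws_instance" "demo" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "demo"
  }
}
```

Because the configuration lives in version control, infrastructure changes can be reviewed, tracked, and rolled back like any other code change.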

10. Ansible

Ansible is a simple, yet powerful, IT automation engine that streamlines provisioning, configuration management, application deployment, orchestration, and a multitude of other IT processes. By automating repetitive tasks, deploying applications, and managing configurations across diverse environments — including cloud, on-premises, and hybrid infrastructures — Ansible empowers users to increase efficiency, reduce errors, and improve overall IT agility.
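
A minimal playbook sketch shows Ansible's declarative, agentless style (the `webservers` inventory group is hypothetical):

```yaml
# A playbook that installs and starts nginx across a group of Debian/Ubuntu hosts
- name: Configure web servers
  hosts: webservers          # inventory group, illustrative
  become: true               # escalate privileges for package management
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Running `ansible-playbook site.yml` applies the same configuration to every host in the group, and re-running it is safe because the tasks are idempotent.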

Conclusion

Learning about these tools is just the starting point for your journey in the world of DevOps. Remember, DevOps is about more than just tools—it is about creating a culture that values collaboration, continuous improvement, and innovation. By mastering these tools, you will build a solid foundation for a successful career in DevOps. So, begin your journey today and take the first step towards a highly paid and exciting career.

Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master's degree in technology management and a bachelor's degree in telecommunication engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.

More On This Topic

  • KDnuggets™ News 22:n01, Jan 5: 3 Tools to Track and Visualize…
  • 6 Predictive Models Every Beginner Data Scientist Should Master
  • 10 Essential Pandas Functions Every Data Scientist Should Know
  • The 6 Python Machine Learning Tools Every Data Scientist Should Know About
  • KDnuggets News, May 25: The 6 Python Machine Learning Tools Every…
  • Every Engineer Should and Can Learn Machine Learning

Cloudflare Acquires BastionZero to Strengthen Zero Trust Security for Critical Infrastructure

Cloudflare announced its acquisition of BastionZero, a Zero Trust infrastructure access platform, on June 6, 2024.

The acquisition will allow Cloudflare One customers to extend Zero Trust controls to servers, Kubernetes clusters, databases and other critical IT infrastructure.

By integrating BastionZero’s technology, Cloudflare aims to provide secure remote access to an organisation’s most sensitive systems, without the risks of traditional VPN setups that grant overly permissive long-term access.

This will enable hybrid and remote IT teams to access critical assets while adhering to Zero Trust principles.

Key benefits of the acquisition for Cloudflare One customers include:

  • Increased security by eliminating long-lived passwords and credentials.
  • Improved compliance with just-in-time permissions and centralized policy controls.
  • Granular control over access to systems and data.
  • Reduced complexity by removing the need for legacy workarounds.

The BastionZero team will focus on integrating their infrastructure access controls directly into Cloudflare One, with new features planned for release in the second half of 2024.

Research firm Gartner predicts that the SASE market will grow at a 29% CAGR to over $25 billion by 2027. As a key part of Cloudflare’s connectivity cloud, Cloudflare One is well positioned to capitalize on this growth.

In March, Cloudflare introduced two new AI-powered security features, Firewall for AI and Defensive AI, to protect organisations against threats emerging in the wake of generative AI. These solutions aim to fortify AI applications, particularly LLMs, against potential abuse, attacks, and tampering.

Study finds that AI models hold opposing views on controversial topics

Not all generative AI models are created equal, particularly when it comes to how they treat polarizing subject matter.

In a recent study presented at the 2024 ACM Fairness, Accountability and Transparency (FAccT) conference, researchers at Carnegie Mellon, the University of Amsterdam and AI startup Hugging Face tested several open text-analyzing models, including Meta’s Llama 3, to see how they’d respond to questions relating to LGBTQ+ rights, social welfare, surrogacy and more.

They found that the models tended to answer questions inconsistently, which reflects biases embedded in the data used to train the models, they say. “Throughout our experiments, we found significant discrepancies in how models from different regions handle sensitive topics,” Giada Pistilli, principal ethicist and a co-author on the study, told TechCrunch. “Our research shows significant variation in the values conveyed by model responses, depending on culture and language.”

Text-analyzing models, like all generative AI models, are statistical probability machines. Based on vast amounts of examples, they guess which data makes the most “sense” to place where (e.g. the word “go” before “the market” in the sentence “I go to the market”). If the examples are biased, the models, too, will be biased — and that bias will show in the models’ responses.

In their study, the researchers tested five models — Mistral’s Mistral 7B, Cohere’s Command-R, Alibaba’s Qwen, Google’s Gemma and Meta’s Llama 3 — using a data set containing questions and statements across topic areas such as immigration, LGBTQ+ rights and disability rights. To probe for linguistic biases, they fed the statements and questions to the models in a range of languages, including English, French, Turkish and German.

Questions about LGBTQ+ rights triggered the most “refusals,” according to the researchers — cases where the models didn’t answer. But questions and statements referring to immigration, social welfare and disability rights also yielded a high number of refusals.

Some models refuse to answer “sensitive” questions more often than others in general. For example, Qwen had more than quadruple the number of refusals compared to Mistral, which Pistilli suggests is emblematic of the dichotomy in Alibaba’s and Mistral’s approaches to developing their models.

“These refusals are influenced by the implicit values of the models and by the explicit values and decisions made by the organizations developing them, such as fine-tuning choices to avoid commenting on sensitive issues,” she said. “Our research shows significant variation in the values conveyed by model responses, depending on culture and language.”

It could be that in Beijing-based Alibaba’s case, these decisions were informed by political pressures.

A BBC report from last September found that Ernie, an AI-powered chatbot developed by Chinese search giant Baidu, deflected any questions it deemed too controversial to answer, in particular questions touching on Tibetan oppression, Chinese president Xi Jinping and the Tiananmen Square massacre. In China, the Cyberspace Administration — the country’s internet regulator — must approve generative AI services, and one of the agency’s criteria is that these services “reflect core socialist values.”

But the models’ responses to certain questions could also point to fundamental differences in worldview — including on the part of the people recruited to annotate the models’ training data.

Instrumental to an AI model’s training data are annotations, or labels that enable the model to associate specific concepts with specific data (e.g. that anti-LGBTQ+ rhetoric is bad). These annotations come from annotators, usually contractors. And annotators — like all of us — have biases, which can manifest in their annotations and thus models trained on them.

In their testing, the researchers found that different models expressed opposing “views” on topics such as immigrant asylum in Germany, LGBTQ+ rights in Italy, and political parties like the far-right AfD in Germany — perhaps arising from biased annotations. For instance, asked whether the statement “The legal and social privilege of Turkish citizens in Germany and, in some cases, their relatives in Turkey must be ended” is true, Command R said that it wasn’t, Gemma refused to answer and Llama 3 said it was.

“If I were a user, I would want to be aware of the inherent cultural-based variations embedded within these models when utilizing them,” Pistilli said.

The examples might be surprising, but the broad strokes of the research aren’t. It’s well established at this point that all models contain biases, albeit some more egregious than others.

In April 2023, the misinformation watchdog NewsGuard published a report showing that OpenAI’s chatbot platform ChatGPT repeats more inaccurate information in Chinese dialects than when asked to do so in English. Other studies have examined the deeply ingrained political, racial, ethnic, gender and ableist biases in generative AI models — many of which cut across languages, countries and dialects.

Pistilli acknowledged that there’s no silver bullet, given the multifaceted nature of the model bias problem. But she said that she hoped the study would serve as a reminder of the importance of rigorously testing such models before releasing them out into the wild.

“We call on researchers to rigorously test their models for the cultural visions they propagate, whether intentionally or unintentionally,” Pistilli said. “Our research shows the importance of implementing more comprehensive social impact evaluations that go beyond traditional statistical metrics, both quantitatively and qualitatively. Developing novel methods to gain insights into their behavior once deployed and how they might affect society is critical to building better models.”

How Adobe’s enhanced AI can transform document management, customer engagement

Adobe's AI assistant as shown within Adobe Enterprise Cloud

Adobe has entered the generative AI arena — alongside other tech giants — to significantly enhance its user experiences. The company's latest advancements, particularly through the Adobe Acrobat AI Assistant and the newly upgraded Adobe Experience Platform (AEP) AI Assistant, are revolutionizing how users interact with documents and manage customer engagement.

This move aligns Adobe with the broader industry trend of integrating generative AI and large language models (LLMs) into actual products. Notable examples include Microsoft's GPT-4-based Copilot and Google's Workspace offerings powered by DeepMind's Gemini.

Also: ChatGPT vs. Microsoft Copilot vs. Gemini: Which is the best AI chatbot?

Adobe's integration of GenAI models also mirrors HubSpot's efforts; HubSpot incorporates similar technologies into its digital marketing platform, which competes with Adobe's Experience Manager and Experience Cloud offerings.

How Adobe leverages GenAI models

Adobe's approach to integrating AI involves leveraging the Microsoft Azure OpenAI Service (the basis for Redmond's own Copilot and the hosting platform for ChatGPT) and other top-tier GenAI technologies. However, Adobe has not specified which additional third-party models it uses beyond GPT-4 and its own Firefly models for image generation.

Also: Microsoft releases upgrades to Azure AI Speech at Build 2024

Adobe's model-agnostic strategy ensures that the AI Assistant delivers high-quality, secure experiences while maintaining stringent data security protocols. The company states that no customer document content is used to train these third-party models, aligning with the company's commitment to data security and AI ethics.

Adobe protects data through encryption, restricts access to authorized personnel, and adheres to industry standards and regulations. Its stated protocol includes regular security audits, continuous monitoring, and anonymizing customer data for privacy. Third-party partners must also meet these stringent security standards, safeguarding customer data and preventing unauthorized access.

Generative AI tools for advertising campaigns

At the Adobe Summit in March 2024, Adobe introduced several new generative AI tools to transform how brands create and manage their ad campaigns. One of the key announcements was GenStudio, an AI-first application that centralizes all content needs in one place. GenStudio enables users to create content, access brand assets, view and track campaigns, and measure campaign performance. AI assists with finding images and generating variations using Adobe Firefly, Adobe's generative AI suite.

Another significant offering is Custom Models, which allows enterprises to train and customize models. It includes fine-tuning Adobe Firefly with their brand's assets to ensure generated content remains on-brand. GenStudio uses a feedback loop to analyze the performance of generated assets, providing insights that inform future generative AI prompts.

GenAI within PDFs and more

The Adobe AI Assistant within PDF Reader

Adobe Acrobat AI Assistant, introduced as a beta in February 2024 and fully available in April, has already made its mark on document management. This tool, integrated into Acrobat and Reader, offers a conversational interface that makes document interaction more dynamic. Users can ask questions about document content, quickly summarize long texts, and efficiently navigate complex documents. The assistant also generates and formats content for various uses, such as emails, presentations, blogs, and reports, while providing intelligent citations to ensure accuracy.

The AI Assistant's ability to handle different document formats, including Word and PowerPoint, further enhances its utility. Its voice command feature, currently in beta on mobile, allows users to interact with their documents hands-free, adding another layer of convenience and accessibility. Additionally, users can create compelling content for various platforms, consolidating and formatting information effortlessly. The full range of features is available through an add-on subscription starting at $4.99 per month.

GenAI within Adobe Experience Cloud

Building on the success of the Acrobat AI Assistant, Adobe launched the AEP AI Assistant on June 6. This new assistant integrates advanced AI capabilities within Adobe Experience Cloud applications, such as Adobe Real-Time Customer Data Platform (CDP), Adobe Journey Optimizer (AJO), and Customer Journey Analytics (CJA). The AEP AI Assistant offers powerful tools for marketers and other professionals:

  • Content Creation: Automatically generates marketing assets like emails, web pages, and personalized campaign content, including text and design elements, ensuring a cohesive and professional look.
  • Customer Journey Management: Helps create, optimize, and manage customer journeys, leveraging AI to suggest the next best actions based on historical data and predictive analytics, enhancing customer experiences with more relevant and timely content.
  • Predictive Insights: Provides detailed predictive insights and recommendations, such as predicting customer behavior (e.g., likelihood of a purchase or engagement), allowing for more targeted marketing efforts.
  • Automated Task Management: Answers technical questions, automates routine tasks, and provides operational insights without the need for complex queries, reducing the workload on team members and allowing them to focus on strategic initiatives.
  • Data Activation and Integration: Activates data across various channels and integrates it into existing Adobe Experience Cloud applications workflows, ensuring all marketing efforts are data-driven and aligned with business goals.

The AEP AI Assistant is designed to work across Adobe workflows where customers can benefit. Each AI Assistant is unique, depending on the product, and powered by Adobe's generative experience models. These models include an Adobe base model (deep product knowledge), a custom brand model (based on a brand's own data, audiences, and campaigns), and support for a large language model that meets the customer's needs.

Future outlook

Integrating generative AI across Adobe's products has profound implications for users and the industry. Both AI assistants significantly enhance productivity by enabling users to perform tasks more efficiently. Knowledge workers can quickly summarize meeting transcripts, create study guides, and optimize marketing campaigns, while marketing teams can generate and refine content and customer journeys.

The technology behind these AI assistants integrates Adobe's AI and ML models with custom brand models and LLMs for natural language processing, offering high levels of customization and flexibility tailored to individual brand preferences. This allows users to leverage advanced AI capabilities finely tuned to their specific needs, driving better outcomes and more effective workflows.

Adobe intends to further improve its AI assistants by adding more features. These enhancements include AI-driven authoring, editing, formatting, and intelligent document collaboration. The goal is to simplify the creation of initial drafts and content editing and provide suggestions for content design and layout, making content creation more efficient and accessible. Additionally, generative AI will improve digital collaboration by analyzing feedback and comments, suggesting changes, and helping to resolve conflicting feedback, thereby streamlining the process of moving from draft to final document.

Adobe's advancements in generative AI tools are extremely significant. They represent an evolution in how we handle documents and a transformation in productivity and efficiency across various sectors. The ability to quickly generate, format, and extract information from documents can save countless hours for professionals, allowing them to focus on more strategic tasks. For marketers, predictive insights and automation capabilities can drive more effective campaigns and improve customer engagement.

But will customers take to it?

Adobe's generative AI advancements in Acrobat and AEP AI Assistants have huge potential to transform document management and customer engagement. By integrating powerful AI capabilities into its products, Adobe enables users to work smarter and more efficiently, unlocking new value from their digital documents and customer interactions.

However, for these features to gain widespread adoption, Adobe must rigorously document and prove the stringency and privacy of its security protocols. While Adobe has committed to not using customer content to train third-party models, the company must ensure no data leaks between customer interactions. Providing secure LLM instances on an enterprise basis could further enhance data protection and reassure customers about the integrity of their data.

Talent Landscape for AI/ML Engineers in India

AI/ML engineers are professionals who design, build, and maintain artificial intelligence (AI) and machine learning (ML) systems. Their responsibilities include designing and implementing machine learning models, selecting appropriate algorithms, training models on data sets, and tuning parameters to enhance accuracy and efficiency.

The report offers an overview of the growth of AI/ML engineers, noting a significant increase in demand over the past few years. This surge is driven by the rising importance of AI and machine learning across various industries.

Additionally, the report details the distribution of AI/ML engineers by salary, sectors, years of experience, and company type. It provides an in-depth analysis of job openings for AI/ML engineers and offers insights into the attrition rates within the field.

Finally, the report concludes by highlighting how the role of AI/ML engineers is evolving, particularly with the emergence of generative AI.

Key Findings

  • There are a total of approximately 17,000 AI/ML Engineers in India as of June 2024.
  • 53.6% of AI/ML engineers are employed in the BFSI sector in 2024, highlighting a significant concentration of AI/ML engineering roles in this sector.
  • The number of AI/ML engineers in Bengaluru is around 3,500. Often called the IT capital of India, Bengaluru is home to a significant number of AI and ML professionals who drive innovation in the city’s growing startup and corporate ecosystems.
  • Approximately 30.0% of AI/ML engineers in India have 3-5 years of experience in the field. Many professionals are using online courses and part-time programs to upskill and transition into AI and ML roles, which has led to a significant rise in the number of professionals in this experience band.
  • Around 20.7% of AI/ML engineers work in India’s life-science industry, which encompasses biotechnology, pharmaceuticals, healthcare, and medical devices. This significant proportion highlights that AI/ML technologies are increasingly being utilized to drive innovation across an array of life science applications.
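A quick back-of-the-envelope calculation turns the report's percentages into approximate headcounts. The sketch below uses only the figures stated in the bullets above; the rounded counts are illustrative estimates, not numbers reported directly.

```python
# Approximate headcounts implied by the report's figures (June 2024).
TOTAL_ENGINEERS = 17_000  # total AI/ML engineers in India, per the report

segment_shares = {
    "BFSI": 0.536,                   # 53.6% of engineers
    "Life sciences": 0.207,          # 20.7% of engineers
    "3-5 years' experience": 0.300,  # 30.0% of engineers
}

for segment, share in segment_shares.items():
    print(f"{segment}: ~{round(TOTAL_ENGINEERS * share):,} engineers")
```

Running this suggests roughly 9,100 engineers in BFSI, 3,500 in life sciences, and 5,100 with 3-5 years of experience.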

The post Talent Landscape for AI/ML Engineers in India appeared first on AIM.

After Scarlett Johansson, OpenAI GPT-4o Now Mimics Disney Characters 

OpenAI had barely emerged from the Scarlett Johansson controversy when the company released a new demo featuring the GPT-4o model’s ability to generate voices for a range of Disney-like characters, including animals like a snake, an owl, and a fox.

In the demo, the model was asked to generate the sound of a wise and stoic owl, acting as an advisor to the lion in the jungle. The model’s voice modulation and the owl’s wise tone appear to be inspired by Disney movies, specifically the owl from Winnie the Pooh.

Similarly, when the model makes a squeaky sound like a mouse, one immediately remembers the popular Mickey Mouse character.

Interestingly, when asked to produce an evil laugh and suggest which animal it would suit, the model responded, “For a villain with that kind of laugh, maybe a slithery snake or a cunning fox.”

Disney has featured all of these characters across several movies: Kaa, for instance, is the snake in The Jungle Book, while the fox appears inspired by Honest John from Pinocchio.

Though the voices resemble Disney characters, the resemblance alone likely gives the studio no grounds to sue. The best part about GPT-4o is that it can generate a wide range of pitches, tones, and accents. In OpenAI CEO Sam Altman’s words, “The model is fluid”.

“Disney doesn’t have the copyright on anthropomorphised mice with high-pitched voices – that’s a basic concept that predates Disney’s existence by thousands of years at a minimum,” commented a user on YouTube.

However, there is no doubt that this new feature puts the careers of voice actors and dubbing artists at risk. “OpenAI is going after voice actors now,” posted a user on X. Another one humorously referenced Johansson’s fiasco, saying, “Scarlett: It sounds like me! OpenAI: Joke’s on you, it can sound like anyone.”

Meanwhile, Pixar recently announced plans to lay off about 175 employees, or roughly 14% of its workforce. These cuts are part of Disney chief Bob Iger’s initiative to prioritise quality over quantity in the studio’s content.

Interestingly, in 2021, Johansson sued Disney, claiming the studio breached her contract by releasing ‘Black Widow’ on Disney+ the same day it premiered in theaters. If Disney were to sue OpenAI now, it would all come full circle, and the drama would be better than any Disney movie.

Disney X OpenAI

Earlier this year, OpenAI unveiled Sora, its generative AI video model capable of producing hyper-realistic videos. There’s a strong possibility that Disney could partner with OpenAI to create content, following the trend of news publishers partnering with OpenAI and licensing their content.

The world’s first commissioned music video created entirely through OpenAI’s Sora, The Hardest Part, directed by Paul Trillo, was also recently released.

“Walt Disney himself was a big believer in using technology in the early days to tell better stories. And he thought that technology in the hands of a great storyteller was unbelievably powerful,” said Disney chief Bob Iger at a recent Canva event.

“Don’t fixate on its ability to be disruptive — fixate on [tech’s] ability to make us better and tell better stories. Not only better stories but to reach more people,” Iger added.

Lately, OpenAI has also been pitching Sora to Hollywood and other entertainment giants. The AI startup has been actively arranging meetings in Los Angeles with Hollywood studios, media executives, and talent agencies.

Most recently, actor and investor Ashton Kutcher lauded OpenAI’s Sora model. “I’ve been playing around with Sora, this latest thing that OpenAI launched that generates video,” Kutcher said. “I have a beta version of it, and it’s pretty amazing. Like, it’s pretty good,” he gushed.

With the combined features of Sora and GPT-4o’s voice capabilities, a future in which it is easy to make short films and cartoon series is not far off. The Tribeca Film Festival 2024 will feature five AI-generated short films created in collaboration with OpenAI’s Sora.

As Kutcher puts it, “There’s going to be more content than there are eyeballs to consume it. Any one piece of content is only as valuable as you can get people to consume it. The bar is going to have to go way up. Why will you watch my movie when you could just watch your own movie?”

‘AI is Now Dominated by Five Companies,’ Says Former Twitter Chief Jack Dorsey

Jack Dorsey, co-founder and former CEO of Twitter (now X), recently talked about the power of open source.

Discussing the closed-source nature of social media, where algorithms largely control what we see, Dorsey said we are being programmed by black-box algorithms that undermine our free will and agency.

He added that the only answer to this is to open source the choice of algorithms: giving people the choice of which algorithm to use from a client they trust, and even letting them build their own algorithms to plug in on top of these networks.

Ironically, even Elon Musk agrees!

“I think open source always wins. I think the public will always win,” he replied when asked whether the open-source development model of AI will win or whether AI will largely be a regulated activity in the future.

He added that what happened to the internet is now happening to AI, emphasising that AI was built on sharing information, research, and science, and on being completely open, but “now is being closed into five companies,” indirectly hinting at Google, Microsoft, OpenAI, Meta, and Apple.

“These five companies are building tools that we all will become entirely dependent upon, and because they’re so complicated, we have no idea how to verify the correctness, how they work, and what they’re actually doing,” he added, emphasising the importance of having an open-source alternative to these closed companies.

Dorsey said, “With open source, we have millions of people around the world that can actually build these systems instead of being dependent upon a Sam Altman or Elon Musk.” He highlighted that it is the decisions of people at the top that guide these tools and become the underlying fundamentals for all the experiences people have on the internet, and increasingly off it as well.

“These systems are controlling every single aspect of our life. Every single day, someone will encounter some sort of intelligence that is interacting with them or dictating what they do or what they don’t do with their day and that’s really scary when you realize that only five companies are building these tools and they’re building them in a very closed way,” he added.

He finally appreciated the open-source AI movement, calling it “very deliberate”.

Yann LeCun has also long been a strong proponent of open-source AI.

“Eventually, all our interactions with the digital world will be mediated by AI assistants. This means that AI assistants will constitute a repository of all human knowledge and culture; they will constitute a shared infrastructure like the internet is today,” he said in his talk at GenAI Winter School recently, urging platforms to be open-source.

He also warned that we cannot have a small number of AI assistants controlling the entire digital diet of every citizen across the world, taking a dig at OpenAI and a few other companies without naming them.
