Google Unveils Open Vision Language Model, PaliGemma 

At Google I/O, the tech giant today introduced PaliGemma, a powerful open vision-language model (VLM), and provided a sneak peek into the upcoming Gemma 2, the next generation of their Gemma family of models.

PaliGemma, inspired by PaLI-3 and built on open components, including the SigLIP vision model and the Gemma language model, is designed for class-leading fine-tune performance on a wide range of vision-language tasks.

These tasks include image and short video captioning, visual question answering, understanding text in images, object detection, and object segmentation.

Google is providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration.

PaliGemma is available through various platforms and resources, including free options like Kaggle and Colab notebooks, and academic researchers can apply for Google Cloud credits to support their work.

The release of PaliGemma brings several key benefits, such as multimodal comprehension, a versatile base model for fine-tuning on a wide range of vision-language tasks, and off-the-shelf exploration with a checkpoint fine-tuned on a mixture of tasks for immediate research use.

Several developers have already started experimenting with it.

I tried it with some plant disease images. It could identify the crop, but it would refuse to detect plant diseases. Found this example quite funny: pic.twitter.com/bASP74bgMn

— Thomas Friedel (@thomascygn) May 14, 2024

Project Navarasa Takes Center Stage at Google I/O

Just a few days ago, we wrote about how Gemma outperformed Meta’s Llama 3 for Indic languages. Today, at Google I/O, India’s Project Navarasa took centre stage, highlighting how Gemma is being used to make AI accessible in 15 Indic languages.

Google highlighted the success of ‘Project Navarasa,’ a multilingual variant of Gemma for Indic languages developed by Telugu LLM Labs.

Harsh Dhand, head of APAC research partnerships at Google, said, “When technology is developed for a particular culture, it won’t be able to solve and understand the nuances of a country like India.”

Project Navarasa leverages Gemma’s powerful tokenizer to enable AI-driven language generation for 15 Indic languages.

“One of Gemma’s features is an incredibly powerful tokenizer which enables the model to use hundreds of thousands of words, symbols and characters across so many alphabets and language systems. This large vocabulary is critical to adapting Gemma to power projects like Navarasa,” said Ramsri Goutham Golla, the co-creator of Navarasa.

“Our biggest dream is to build a model to include everyone from all corners of India,” said Golla, adding that Navarasa is a fine-tuned model based on Google’s Gemma, trained for Indic languages.

He said they built Navarasa to create culturally rooted large language models where people can talk in their native language and receive responses in their native language.

Many developers that AIM spoke to said that Gemma is better than Llama for Indic languages. “Gemma shines compared to the Llama 2 and 3 models,” said Adithya S Kolavi, founder of Cognitive Lab, who built a leaderboard for Indic LLMs.

“Models using Llama 2 extended its tokenizer by 20 to 30k tokens, reaching a vocabulary size of 50-60k. Continuous pre-training is crucial for understanding these new tokens. In contrast, Gemma’s tokenizer initially handles Indic languages well, requiring minimal fine-tuning for specific tasks,” explained Kolavi.

According to Vivek Raghavan, the co-founder of Sarvam AI, Gemma’s powerful tokenizer gives it an advantage over Llama when it comes to Indic languages. He explained, “The tokenization tax for Indic languages means asking the same question in Hindi costs three times more tokens than in English, and even more for languages like Odiya due to their underrepresentation in these models.”

Meanwhile, OpenAI recently released GPT-4o, an update to their language model that includes a new tokenizer and an extended vocabulary size of 200k tokens, compared to 100k tokens in GPT-4.

This update significantly improved the support for several Indian languages, including Hindi, Gujarati, Marathi, Telugu, Tamil, and Urdu.

Although Gemma 2’s tokenizer vocabulary wasn’t clearly specified in the demo, the model is said to handle ‘hundreds of thousands of words, symbols and characters’. In comparison, GPT-4o’s 200k-token base tokenizer currently outperforms Gemma for Indic and other non-English languages in terms of token reduction.
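To make the “token reduction” point concrete, here is a minimal sketch comparing the two OpenAI tokenizers with the tiktoken library (cl100k_base is GPT-4’s encoding, o200k_base is GPT-4o’s); the Hindi sentence is illustrative and the exact counts will vary:

```python
# Minimal sketch: compare GPT-4's and GPT-4o's tokenizers on the same text.
# The sentences are illustrative; counts are printed, not asserted.
import tiktoken

text_en = "What is the capital of India?"
text_hi = "भारत की राजधानी क्या है?"

gpt4 = tiktoken.get_encoding("cl100k_base")    # ~100k-token vocabulary (GPT-4)
gpt4o = tiktoken.get_encoding("o200k_base")    # ~200k-token vocabulary (GPT-4o)

for label, text in [("English", text_en), ("Hindi", text_hi)]:
    print(label,
          "| cl100k_base:", len(gpt4.encode(text)), "tokens",
          "| o200k_base:", len(gpt4o.encode(text)), "tokens")
```

The larger o200k_base vocabulary typically produces far fewer tokens for Indic scripts, which is what the “tokenization tax” and “token reduction” discussion above refers to.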

More Power to Gemma

Looking ahead, Google announced the upcoming arrival of Gemma 2, the next generation of Gemma models. Gemma 2 will be available in new sizes for a broad range of AI developer use cases and features a brand-new architecture designed for breakthrough performance and efficiency. Key benefits include class-leading performance, reduced deployment costs, and versatile tuning toolchains.

Now you can try out our Indic Gemma Model Navarasa 2.0 (supports language generation in 15 languages) easily as a chat interface at https://t.co/KFQ6qfWBf0
Ask a question in English and ask it to respond in Hindi, Telugu etc or ask directly in the native language.
Kudos to… pic.twitter.com/YuHniHo5s4

— Ramsri Goutham Golla (@ramsri_goutham) April 2, 2024

At this year’s developer conference, Google took a pointed jab at OpenAI, making it clear that it is making AI helpful for everyone, not just him or her.


Google Search’s AI Summaries Are Generally Available This Week

AI Overviews, the next evolution of Search Generative Experience, will roll out in the U.S. this week and in more countries soon, Google announced at the Shoreline Amphitheater in Mountain View, CA. Google showed several other changes coming to Google Cloud, Gemini, Workspace and more, including AI actions and summarization that can work across apps — opening up some interesting options for small businesses.

Search will include AI Overviews

AI Overviews is the expansion of Google’s Search Generative Experience (SGE), the AI-generated answers that appear at the top of Google searches. You may have seen SGE in action already, as select U.S. users have been able to try it since last October. SGE can generate images or text. AI Overviews adds AI-generated information to the top of Google Search results.

With AI Overviews, “Google does the work for you. Instead of piecing together all the information yourself, you can ask your questions” and “get an answer instantly,” said Liz Reid, Google’s vice president of Search.

By the end of the year, AI Overviews will come to over a billion people, Reid said. Google wants to be able to answer “ten questions in one,” linking tasks together so that the AI can make accurate connections between information. This is possible through multi-step reasoning. For example, someone could ask not only for the best yoga studios in the area, but also for the distance between the studios and their home and the studios’ introductory offers. All of this information will be listed in convenient columns at the top of the Search results.

Soon, AI Overviews will be able to answer questions about videos provided to it, too.

AI Overviews is rolling out in “the coming weeks” in the U.S. and will be available in Search Labs first.

Does AI Overviews actually make Google Search more useful? Google says it will carefully note which images are AI generated and which come from the web, but AI Overviews may dilute Search’s usefulness if the AI answers prove incorrect, irrelevant or misleading.

Gemini 1.5 Pro gets some upgrades, including a 2 million token context window for select users

Google’s large language model Gemini 1.5 Pro is getting quality improvements and a new, smaller companion model, Gemini 1.5 Flash. New features for developers in the Gemini API include video frame extraction, parallel function calling, and context caching. Native video frame extraction and parallel function calling are available now; context caching is expected to arrive in June.
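As a rough illustration of what parallel (automatic) function calling looks like in practice, here is a minimal sketch assuming the google-generativeai Python SDK; the tool functions, model name, and API key are placeholders rather than code from Google’s announcement:

```python
# Minimal sketch of function calling with the google-generativeai SDK.
# Tool functions and the API key are hypothetical placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

def get_weather(city: str) -> str:
    """Illustrative tool: return a dummy weather report for a city."""
    return f"Sunny and 24°C in {city}"

def get_local_time(city: str) -> str:
    """Illustrative tool: return a dummy local time for a city."""
    return f"It is 10:00 in {city}"

# Plain Python callables can be passed as tools; the model may call
# one or several of them while composing its answer.
model = genai.GenerativeModel("gemini-1.5-pro",
                              tools=[get_weather, get_local_time])
chat = model.start_chat(enable_automatic_function_calling=True)

response = chat.send_message("What's the weather and local time in Mumbai?")
print(response.text)
```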

Available globally today, Gemini 1.5 Flash is a smaller model focused on responding quickly. Users of Gemini 1.5 Pro and Gemini 1.5 Flash will be able to input information for the AI to analyze in a 1 million token context window.

On top of that, Google is expanding Gemini 1.5 Pro’s context window to 2 million tokens for select Google Cloud customers. To get the wider context window, join the waitlist in Google AI Studio or Vertex AI.

The ultimate goal is “infinite context,” Google CEO Sundar Pichai said.

Gemma 2 comes in 27B parameter size

Google’s small language model, Gemma, will get a major overhaul in June. Gemma 2 will have a 27B parameter model, in response to developers requesting a bigger Gemma model that is still small enough to fit inside compact projects. Gemma 2 can run efficiently on a single TPU host in Vertex AI, Google said. Gemma 2 will be available in June.

Plus, Google rolled out PaliGemma, a vision-language model for tasks like image captioning and answering questions about images. PaliGemma is available now in Vertex AI.

Gemini summarization and other features will be attached to Google Workspace

Google Workspace is getting several AI enhancements, which are enabled by Gemini 1.5’s long context window and multimodality. For example, users can ask Gemini to summarize long email threads or Google Meet calls. Gemini will be available in the Workspace side panel next month on desktop for businesses and consumers who use the Gemini for Workspace add-ons and the Google One AI Premium plan. The Gemini side panel is now available in Workspace Labs and for Gemini for Workspace Alpha users.

Workspace and AI Advanced customers will be able to use some new Gemini features going forward, starting for Labs users this month and generally available in July:

  • Summarize email threads.
  • Run a Q&A on your email inbox.
  • Use longer suggested replies in Smart Reply that draw contextual information from email threads.

Gemini 1.5 can make connections between apps in Workspace, such as Gmail and Docs. Google Vice President and General Manager for Workspace Aparna Pappu demonstrated this by showing how small business owners could use Gemini 1.5 to organize and track their travel receipts in a spreadsheet based on an email. This feature, Data Q&A, is rolling out to Labs users in July.

Next, Google wants to be able to add a Virtual Teammate to Workspace. The Virtual Teammate will act like an AI coworker, with an identity, a Workspace account and an objective. (But without the need for PTO.) Employees can ask questions about work to the assistant, and the assistant will hold the “collective memory” of the team it works with.

The virtual teammate has a Workspace account and a profile. Users can set specific objectives for the AI in the profile. Image: Google

Google hasn’t announced a release date for Virtual Teammate yet. They plan to add third-party capabilities to it going forward. This is just speculative, but Virtual Teammate might be especially useful for business if it connects to CRM applications.

Voice and video capabilities are coming to the Gemini app

Speaking and video capabilities are coming to the Gemini app later this year. Gemini will be able to “see” through your camera and respond in real time.

Users will be able to create “Gems,” customized agents to do things like act as personal writing coaches. The idea is to make Gemini “a true assistant,” which can, for example, plan a trip. Gems are coming to Gemini Advanced this summer.

The addition of multimodality to Gemini comes at an interesting time compared to the demonstration of ChatGPT with GPT-4o earlier this week. Both showed very natural-sounding conversation. OpenAI’s AI voice responded to interruption, but misread or misinterpreted some situations.

SEE: OpenAI showed off how the newest iteration of the GPT-4 model can respond to live video.

Imagen 3 improves at generating text

Google announced Imagen 3, the next evolution of its image generation AI. Imagen 3 is intended to be better at rendering text, which has been a major weakness for AI image generators in the past. Select creators can try Imagen 3 in ImageFX at Google Labs today, and Imagen 3 is coming soon for developers in Vertex AI.

Google and DeepMind reveal other creative AI tools

Another creative AI product Google announced was Veo, their next-generation generative video model from DeepMind. Veo created an impressive video of a car driving through a tunnel and onto a city street. Veo can be used by select creators in VideoFX, an experimental tool found at labs.google.

Other creative types might want to use the Music AI Sandbox, a set of generative AI tools for making music. Neither public nor private release dates for Music AI Sandbox have been announced.

Sixth-generation Trillium TPUs boost the power of Google Cloud data centers

Pichai introduced Google’s 6th generation Google Cloud TPUs, called Trillium. Google claims the TPUs show a 4.7X improvement over the previous generation. Trillium TPUs are intended to add greater performance to Google Cloud data centers, and compete with NVIDIA’s AI accelerators. Time on Trillium will be available to Google Cloud customers in late 2024. Plus, NVIDIA’s Blackwell GPUs will be available in Google Cloud starting in 2025.

TechRepublic covered Google I/O remotely.

Claude is Finally Available to Users in the EU

Anthropic announced the release of Claude to its user base, both individuals and businesses, in the EU on Tuesday.

This is another step in Anthropic’s focus on its EU customers, as the company released its Claude API in Europe earlier this year.

The AI chatbot will be accessible on desktop as well as on the company’s newly launched iOS app. Users in the EU will be able to access both the free and paid versions of Claude, with the paid subscription costing €18, excluding VAT.

The EU release comes after the region passed a set of AI regulations earlier this year. In line with this, Anthropic made sure to emphasise its focus on privacy and security.

Alongside the EU release, the company also announced an update to their Terms of Service. Anthropic highlighted policy refinements, high-risk use cases and certain disclosure requirements within their usage policy, possibly to align with the regulations put forth by the EU.

“We’ve refined and restructured our policy to give more details about the individuals and organisations covered by our policies. We’ve broken out some specific “high-risk use cases” that have additional requirements due to posing an elevated risk of harm. We added new disclosure requirements so that organisations who use our tools also help their own users understand they are interacting with an AI system,” the company said.

Additionally, while a default data retention period was not specified previously, it has now been set at 30 days.

In terms of what’s on the table with this development, businesses in the EU will also have access to Claude Team, a new plan the company offers specifically for workplaces, with full access to Opus, Sonnet and Haiku in addition to everything in Claude Pro. The Team plan also includes tools for admin, billing management and document processing.

The Team plan was launched alongside the Claude iOS app earlier this month. Its subscription costs $30 per user, or €28 plus VAT in the EU.


Empowering businesses to make informed decisions


In today’s data-driven business landscape, data quality enables organizations to make informed decisions. Data quality tools are pivotal in ensuring that businesses have access to accurate, reliable, and consistent data. By leveraging these tools, organizations can enhance their decision-making processes, improve operational efficiency, and establish a competitive advantage in the market. This piece will explore the significance of data quality tools in empowering businesses to make informed decisions, highlighting their critical role in maintaining data integrity and enabling actionable insights.

Understanding data quality tools

Data quality tools are instrumental in maintaining the integrity and reliability of data within organizations. These tools encompass a range of functionalities aimed at identifying, correcting, and preventing errors or inconsistencies in data. From data profiling to cleansing and deduplication, data quality tools offer various mechanisms to ensure that data meets predefined quality standards. By analyzing data patterns, detecting anomalies, and enforcing validation rules, these tools help organizations improve the accuracy and usability of their data assets. Data quality tools are the cornerstone of effective data management strategies, enabling businesses to leverage high-quality data for making informed decisions.

Importance of data quality for informed decision-making

Data quality is paramount for organizations striving to make informed decisions. High-quality data guarantees that the insights from data analysis are accurate and reliable, thereby enabling confident decision-making. With reliable data, organizations can avoid making decisions based on inaccurate or incomplete data, which can result in costly errors and overlooked opportunities. Data quality tools play a crucial role in maintaining data integrity by identifying and rectifying mistakes, inconsistencies, and redundancies in data. These tools empower businesses to make decisions confidently, drive operational efficiency, and achieve strategic objectives by ensuring data accuracy, consistency, and currency. Data quality is the foundation upon which informed decision-making is built, and data quality tools are indispensable assets in this endeavor.

Key features and functionality of data quality tools

Data quality tools provide a variety of features and functionalities designed to enhance the accuracy and reliability of data. Some key features include:

  • Data Profiling: Tools analyze data structure, content, and quality to identify anomalies and inconsistencies.
  • Data Cleansing: Tools automatically detect and correct errors, such as misspellings, duplicates, and formatting issues, ensuring data accuracy.
  • Deduplication: Tools identify and remove duplicate records from datasets, eliminating redundancies and ensuring data consistency.
  • Validation Rules: Tools enforce predefined validation rules to ensure data conforms to specified criteria, such as data type, format, and range.
  • Anomaly Detection: Tools employ algorithms to detect unusual patterns or outliers in data, flagging potential errors or anomalies for further investigation.
  • Data Monitoring: Tools continuously monitor data quality metrics and alert users to deviations from established thresholds, facilitating proactive data management.

These features enable organizations to maintain high-quality data, empowering them to make insightful decisions founded on accurate and reliable information.
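As a rough illustration of how a few of these checks look in practice, here is a minimal sketch using pandas; the dataset, column names and validation rules are hypothetical:

```python
# Minimal, illustrative sketch of profiling, validation and deduplication
# with pandas. The data and rules are made up for demonstration purposes.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [101, 102, 102, 103],
    "email": ["a@example.com", "b@example", "b@example", "c@example.com"],
    "age": ["34", "twenty", "twenty", "51"],
})

# Profiling: basic structure and missing-value summary.
print(df.dtypes)
print(df.isna().sum())

# Validation rules: email must match a simple pattern, age must be numeric.
valid_email = df["email"].str.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
numeric_age = pd.to_numeric(df["age"], errors="coerce").notna()
print(df[~(valid_email & numeric_age)])   # rows that fail the rules

# Deduplication: drop exact duplicate records.
clean = df.drop_duplicates()
print(f"{len(df) - len(clean)} duplicate row(s) removed")
```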

Enhancing business processes with data quality tools

Data quality tools are crucial in optimizing various business processes across organizations. Here’s how they strengthen efficiency and effectiveness:

  • Improved Decision-Making: By ensuring data accuracy and reliability, these tools empower organizations to make knowledgeable decisions, driving strategic initiatives and operational efficiency.
  • Enhanced Customer Experience: Data quality tools help maintain clean and up-to-date customer data, which leads to personalized experiences, targeted marketing campaigns, and improved customer satisfaction.
  • Streamlined Operations: By automating data cleansing and validation processes, these tools streamline operations, reduce manual effort, and minimize the risk of errors in critical workflows.
  • Compliance and Risk Management: Data quality tools help organizations ensure regulatory compliance by maintaining data accuracy, integrity, and confidentiality, thus mitigating risks associated with non-compliance and data breaches.

Data quality tools are indispensable assets for businesses seeking to optimize processes, drive innovation, and maintain a competitive edge in today’s data-driven environment.

Challenges and considerations in implementing data quality tools

Implementing data quality tools may pose several challenges and considerations for organizations:

  • Integration Complexity: Integrating data quality tools with current systems and workflows can be intricate and time-intensive, requiring careful planning and coordination.
  • Resource Constraints: Limited resources, such as budget, expertise, and time, may hinder the successful implementation and utilization of data quality tools within organizations.
  • Change Management: Resistance to change and a lack of organizational buy-in may impede the adoption and effectiveness of data quality initiatives, necessitating robust change management strategies.

Conclusion

In conclusion, data quality tools are indispensable assets for businesses striving to make well-informed decisions and drive success. By ensuring data integrity, accuracy, and reliability, these tools empower organizations to optimize processes, enhance customer experiences, and mitigate risks. Despite challenges in implementation, the benefits of data quality tools far outweigh the obstacles. As organizations prioritize data-driven decision-making, investing in robust data quality solutions becomes imperative for staying competitive and achieving long-term success in today’s dynamic business landscape.

LearnLM is Google’s new family of AI models for education

Google says it’s developed a new family of generative AI models “fine-tuned” for learning: LearnLM. A collaboration between Google’s DeepMind AI research division and Google Research, LearnLM models — built on top of Google’s Gemini models — are designed to “conversationally” tutor students on a range of subjects, Google says. LearnLM is already powering features […]


HPE Delivers Second Exascale Supercomputer, Aurora


Hewlett Packard Enterprise has announced the delivery of the world’s second exascale supercomputer, Aurora, in collaboration with Intel for the United States Department of Energy’s Argonne National Laboratory.

Aurora has achieved 1.012 exaflops on 87% of the system, making it the world’s second-fastest supercomputer as verified by the TOP500 list of the most powerful supercomputers.

The supercomputer has topped the HPL Mixed Precision (MxP) Benchmark with 10.6 exaflops on 89% of the system.

Trish Damkroger, senior vice president and general manager of HPC & AI Infrastructure Solutions at HPE, said, “We are proud of the strong partnership with the U.S. Department of Energy, Argonne National Laboratory, and Intel to realise a system of this scale and magnitude.”

What this supercomputer does

An exascale computing system can process one quintillion operations per second, enabling solutions to some of humanity’s most complex problems. Aurora, built with the HPE Cray EX supercomputer, supports this scale with the largest deployment of HPE Slingshot, an Ethernet-based supercomputing interconnect.

This fabric connects Aurora’s 75,000 compute node endpoints, 2,400 storage and service network endpoints, and 5,600 switches, improving performance across Aurora’s 10,624 compute blades, 21,248 Intel Xeon CPU Max Series processors, and 63,744 Intel Data Center GPU Max units, making it one of the world’s largest GPU clusters.
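For context, the stated component counts imply a simple per-blade breakdown (these ratios are derived from the figures above, not separately reported by HPE; an exaflop is $10^{18}$ floating-point operations per second):

$$
\frac{21{,}248\ \text{Xeon CPU Max processors}}{10{,}624\ \text{blades}} = 2\ \text{CPUs per blade},
\qquad
\frac{63{,}744\ \text{GPU Max units}}{10{,}624\ \text{blades}} = 6\ \text{GPUs per blade}.
$$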

Planned as an AI-capable system from inception, Aurora allows researchers to use generative AI models to accelerate scientific discovery. Early AI-driven research on Aurora includes brain mapping, high-energy particle physics, and machine-learning-accelerated drug design and discovery.

Aurora, housed at the Argonne Leadership Computing Facility (ALCF), a part of the U.S. Department of Energy’s Office of Science user facility, is the product of a strong private-public partnership between HPE, Intel, the U.S. Department of Energy, and Argonne National Laboratory.

The supercomputer has reached exascale on a partial run, utilising 9,234 of the total nodes. It is an open science system housed at the Argonne Leadership Computing Facility, aiming to support groundbreaking scientific research.

In contrast, Recursion’s BioHive-2, powered by NVIDIA GPUs, was unveiled yesterday and achieves two exaflops of AI performance, making it one of the top 35 supercomputers globally.

Unlike Aurora, which focuses broadly on scientific research, BioHive-2 is tailored for pharmaceutical research and development.


10 Must-Watch OpenAI GPT-4o Demos

At the OpenAI Spring Update, OpenAI CTO Mira Murati unveiled GPT-4o, a new flagship model that enriches its suite with ‘omni’ capabilities across text, vision, and audio, promising iterative rollouts to enhance both developer and consumer products in the coming weeks.

With GPT-4o, OpenAI trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. While introducing the model, OpenAI made several demonstrations to showcase its capabilities. Here, we have cherry-picked the top ones.

For customer service

This was a fun one! Take a look at 2 AI agents resolving a customer service claim with #OpenAI new #GPT4o.
Working with customers to build transformational solutions always gets me fired up. The potential solutions we can build with this new SOTA model has my head spinning! pic.twitter.com/86SNgNI6Tl

— Joe Beutler (@JoeBeutler) May 14, 2024

OpenAI’s GPT-4o is capable of engaging in natural and realistic voice conversations. This makes it an ideal foundation for customer service chatbots, where two AI agents can collaborate to resolve a customer service claim.

Real Time Translation

Live audience request for GPT-4o realtime translation pic.twitter.com/VSj5phFKM6

— OpenAI (@OpenAI) May 13, 2024

During the spring update event, OpenAI’s CTO, Mira Murati, demonstrated the real-time translation capabilities of GPT-4o, successfully translating between Italian and English. This feature poses a significant threat to Google Translate and Duolingo, which offer similar services.

Interestingly, Duolingo stock fell 3.5%, wiping out ~$250M in market value, within minutes of OpenAI demoing the real-time translation capabilities of GPT-4o.

Human-Computer-Computer Interaction

Introducing GPT-4o, our new model which can reason across text, audio, and video in real time.
It's extremely versatile, fun to play with, and is a step towards a much more natural form of human-computer interaction (and even human-computer-computer interaction): pic.twitter.com/VLG7TJ1JQx

— Greg Brockman (@gdb) May 13, 2024

In this demo, OpenAI President Greg Brockman moderated a conversation between two ChatGPT instances, showing GPT-4o reasoning across text, audio, and video in real time and hinting at a much more natural form of human-computer (and even human-computer-computer) interaction.

AI Education and Tutor

This demo is insane.
A student shares their iPad screen with the new ChatGPT + GPT-4o, and the AI speaks with them and helps them learn in *realtime*.
Imagine giving this to every student in the world.
The future is so, so bright. pic.twitter.com/t14M4fDjwV

— Mckay Wrigley (@mckaywrigley) May 13, 2024

In another demo, presented by Khan Academy, a student shared their screen with ChatGPT running GPT-4o. ChatGPT assisted the student step by step in solving a mathematical problem; rather than providing the entire solution at once, it guided the student towards it. Students can also share their notebooks using their mobile camera, and ChatGPT will be able to understand the content.

Meeting AI with GPT-4o

One demo that's easy to miss, but I think is significant in what it shows is likely to be possible soon, is this demo — GPT-4o for meetings: https://t.co/UeT5285R9c

— Greg Brockman (@gdb) May 13, 2024

GPT-4o, through the desktop, can join online meetings and moderate them as well, giving its own valuable inputs, which can be crucial in making decisions. Moreover, it can transcribe and summarize meeting discussions in real-time, ensuring that no important details are missed and providing a reliable reference for participants.

Assistant for Visually Impaired Individuals

GPT-4o as tested by @BeMyEyes: pic.twitter.com/WeAoVmxUFH

— Greg Brockman (@gdb) May 14, 2024

Be My Eyes, a mobile app designed for visually impaired individuals, tested GPT-4o’s vision capabilities to assist a visually impaired person in navigating the city. ChatGPT was able to accurately identify the location and minute details of the surroundings.

Unlike human volunteers who may not be available at all times, GPT-4o can offer continuous support, ensuring that visually impaired users have access to assistance whenever they need it.

Interview Prep

Interview prep with GPT-4o pic.twitter.com/st3LjUmywa

— OpenAI (@OpenAI) May 13, 2024

In this demonstration, ChatGPT helps a candidate prepare for an interview. Using the front camera, ChatGPT can tell whether the candidate is dressed appropriately. It can also conduct mock interviews and provide feedback on answers, highlighting strengths and areas for improvement to enhance performance.

Jam with ChatGPT

Lullabies and whispers with GPT-4o pic.twitter.com/5T7ob0ItuM

— OpenAI (@OpenAI) May 13, 2024

GPT-4o has a surprise talent – it can sing! Users can request personalised songs for special occasions like birthdays, anniversaries, or just for fun. The chatbot can generate a variety of tunes and melodies based on emotions or specific details provided by the user, from soft whispers to energetic anthems.

AI Coding Assistant

Live demo of coding assistance and desktop app pic.twitter.com/GlSPDLJYsZ

— OpenAI (@OpenAI) May 13, 2024

OpenAI has introduced the ChatGPT app for desktop. The app allows for voice conversations, screenshot discussions, and instant access to ChatGPT, acting as a friendly, go-to colleague in times of crisis. It can help with any problem you come across, from writing code to brainstorming ideas.

Rock, Paper, Scissors with GPT-4o

6. Rock, Paper, Scissors with GPT-4o pic.twitter.com/oMuMRRbrKO

— Angry Tom (@AngryTomtweets) May 13, 2024

You can enjoy playing fun games like Rock, Paper, Scissors, with ChatGPT as the perfect referee. It can also hype you up and cheer for you during the game.


DSC Weekly 14 May 2024

Announcements

  • Once considered an afterthought, application security risk management is now an integral aspect of application development. The rise of cloud-native adoption and proliferation of microservices has enlarged the attack surface, requiring elevated security measures. Service mesh technologies and API gateways emerge as pivotal solutions, streamlining communication, enhancing reliability, and fortifying security. Join the Advancing Application Security Practices Summit to discover how to bolster your security posture, exploring ways to mitigate security vulnerabilities, manage risks, and fortify against cyberattacks.
  • Zero trust adoption has surged in recent years, driven by two main factors: 1) a wave of high-profile data breaches that highlighted the need for enhanced cybersecurity strategies, and 2) the COVID-19 pandemic, which created the need for remote access technologies beyond VPN. While the zero trust model can be highly beneficial, it does have some challenges. That’s why making zero trust cybersecurity as effective as possible starts by understanding its challenges. In the upcoming The Zero Trust Journey: From Concept to Implementation summit, industry leaders, experts and practitioners provide resources and recommendations to help you build a zero trust framework.

Top Stories

  • Zero Trust Architecture and AI
    May 13, 2024
    by Dan Wilson
    During this very special 6th episode of the AI Think Tank Podcast, I had the honor to speak with Patrick Stingley, a seasoned data scientist and enterprise architect for the U.S. government, currently associated with the Bureau of Land Management. His decades of experience in various sectors of government service have endowed him with an intricate understanding of information technology, particularly within the realm of artificial intelligence (AI) and data security.
  • GenAI Creativity Exercise: Beyond Mere Productivity
    May 11, 2024
    by Bill Schmarzo
    Senior management seems infatuated with leveraging Generative AI (GenAI) to improve productivity. While productivity improvements are “nice,” they don’t necessarily equate to better, more effective, or more relevant outcomes. Focusing your GenAI initiatives on improving productivity is missing the more significant economic opportunity: the opportunity to leverage GenAI and Artificial Intelligence (AI) in general to create new sources of customer, product, service, and operational value.
  • Evaluating the impact of data analytics on user experience design in SaaS platforms
    May 9, 2024
    by Rob Turner
    The SaaS market is well established, with revenues predicted to top $282 billion this year, and strong annual growth expected to continue. This puts the emphasis on ensuring that user experience (UX) design is honed and refined as much as possible, as platforms which fall flat here can expect competitors to siphon off users in vast volumes.

In-Depth

  • Unleashing a new era of investment banking through the power of AI
    May 14, 2024
    by Aileen Scott
    Investment banking has become more prevalent, and AI is expected to revolutionize financial transactions. AI’s increasing power has made it a force in all industries, not just the finance sector. AI has revolutionized investment banking day-to-day activities, from automated trading to customer service automation.
  • Integrating microservices with legacy systems through API management
    May 13, 2024
    by Ovais Naseem
    In software architecture, the shift towards microservices has become a dominant trend. Microservices offer agility, scalability, and resilience, making them an attractive choice for modernizing IT infrastructures. However, many organizations grapple with the challenge of integrating microservices with their existing legacy systems.
  • Empowering businesses to make informed decisions
    May 13, 2024
    by Ovais Naseem
    In today’s data-driven business landscape, data quality enables organizations to make informed decisions. Data quality tools are pivotal in ensuring that businesses have access to accurate, reliable, and consistent data. By leveraging these tools, organizations can enhance their decision-making processes, improve operational efficiency, and establish a competitive advantage in the market.
  • Future of track and trace solutions for supply chain management
    May 9, 2024
    by Manoj Kumar
    Supply chain track and trace solutions is a system used to cover the inflow of goods from the point of origin to the point of destination. This system has revolutionized the way businesses manage their supply chain operations.
  • DSC Weekly 7 May 2024
    May 7, 2024
    by Scott Thompson
    Read more of the top articles from the Data Science Central community.

Accenture Appoints Arnab Chakraborty as its First Chief Responsible AI Officer 

Accenture recently appointed Arnab Chakraborty as chief responsible AI officer to scale and monitor AI systems responsibly, alongside enhancing clients’ growth and value across industries.

“Leaders acknowledge the importance of responsible AI principles, but there is a gap in their practical implementation—our research shows that only 2% of companies have fully operationalised responsible AI across their organisations,” shared Chakraborty.

With over two decades of expertise, Chakraborty holds ten patents in machine learning solutions for business challenges.

He has also been involved in shaping the WEF AI Governance Alliance and is a member of the US Senate AI Insight Forum, where he advises on the practical considerations of balancing AI innovation with risk mitigation.

“Clients are eager to embrace the potential of generative AI, and we are ready to help them build responsible AI into every use,” said Julie Sweet, chair and CEO of Accenture. “Our focus is to enable our clients to innovate AI safely and be ready to seize the opportunities that AI will bring in the decades ahead.”

Last year, in November, Accenture appointed Lan Guan as the company’s first Chief AI Officer, recognizing her significant contributions to the field and her leadership in data and AI practices.

Guan, who has been with Accenture for years, has a rich history of AI innovation, including building a robot to teach English to children in rural China at age 16.

“We have a lot of work to make sure this technology is democratised and not limited to a small group,” said Guan. “A lot of clients I talk to want to have this kind of leadership role directly reporting to the CEO, so that they have the right leader to help them understand their individualised roadmap and what areas within their entire value chain they should start with first.”

Accenture’s recent initiatives include expanding advisory and technology services to help companies establish and implement AI policies and standards. They are also introducing managed services to monitor AI solutions and ensure compliance with evolving regulations, such as the EU AI Act.

Accenture’s investment in AI is significant. The company recently recorded $1.1 billion from generative AI projects in the first half of the fiscal year, surpassing the combined revenue of all VC-backed startups in this sector.

By focusing on responsible AI and extensive collaboration with industry partners, Accenture is poised to lead the charge in AI innovation while ensuring ethical standards and practices are maintained.

“Accenture will pave the way to help our clients establish and embed responsible AI, closing the gap between principles and action,” added Chakraborty.


Pursue a Master’s in Data Science with the 3rd Best Online Program 2024

Sponsored Content

Master's (MS) in Applied Data Science
Image by Bay Path University

“Completing the program has provided me with proficiency in essential data science methodologies and programming languages, including R, Python, SQL, and Tableau. Additionally, the program's flexibility allowed me to select project subjects aligned with my interests, fostering hands-on learning experiences. Through these projects, I gained practical experience applying a diverse set of data science skills and was able to build a diverse portfolio.”

– Aspen Gulley, G’23

Bay Path University’s Master of Science in Applied Data Science Degree Program Provides:

  • A career path in data science, regardless of your background and experience, through two tracks: generalist and specialist
  • Flexibility for working professionals with convenient one and two-year schedules
  • Small class settings, led by an extraordinary team of faculty who teach and mentor students throughout the program
  • Hands-on application using essential programming languages such as Python, SAS, R, and SQL
  • A project-based curriculum teaching students to solve real-world business challenges,
    using both "small" and "big" data and cutting-edge practices in statistical modeling, machine learning, and data mining
  • A project-oriented capstone that will harness the skills gained throughout the program

Enrolling now for October 28th.

