This new AI tool will summarize YouTube videos for you in seconds


Watching a YouTube video, whether for class, entertainment or research, requires time and focus — both of which can sometimes be difficult to find.

The Google Chrome extension Eightify is here to help.

Also: This AI chatbot can sum up a PDF and answer your questions about it

The extension can provide YouTube summaries powered by ChatGPT within ten seconds, allowing users to get the key information without having to sit through a long video.

The extension is also free to download, uses OpenAI's ChatGPT API and has over 40,000 users on the Chrome Web Store.
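For the curious, here is a minimal sketch of the general pattern a tool like this likely follows: pull the video's transcript and ask the ChatGPT API to condense it into bullet points. The libraries, model and prompt below are illustrative assumptions, not Eightify's actual implementation.

```python
# Sketch only: transcript-to-summary via the ChatGPT API.
# Assumes the third-party youtube-transcript-api package and an
# OpenAI key in the OPENAI_API_KEY environment variable.
import openai
from youtube_transcript_api import YouTubeTranscriptApi

# Join the timed transcript chunks into one block of text.
transcript = " ".join(
    chunk["text"] for chunk in YouTubeTranscriptApi.get_transcript("VIDEO_ID")
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Summarize this video transcript as short bullet points."},
        {"role": "user", "content": transcript[:12000]},  # crude guard against context limits
    ],
)
print(response["choices"][0]["message"]["content"])
```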

Also: Today's AI boom will amplify social problems if we don't act now, says AI ethicist

Because this sounded too good to be true, I put it to the test. To get straight to the point, I was impressed.

Easy Installation

To install the extension, all I had to do was visit Eightify's page in the Chrome Web Store and click on "Add to Chrome."

The installation took under a minute, and I was immediately ready to start using the extension on YouTube.

Side note: Since the extension requires Chrome, you'll want to install Google Chrome first if you don't already have it.

Testing the extension on YouTube

For the sake of this review, I used the extension on ZDNET's latest YouTube video.

Once I started playing the video on YouTube, an Eightify button appeared in two spots: in the right-hand banner and next to the like and dislike buttons.

To get started, I had to create an Eightify account, and once I did I was ready to start the summary generation.

Also: How to use Wordtune AI to rewrite texts on your iPhone

As advertised, within seconds, the extension took the original eight-minute video and condensed it into bullet points under Insights and Summary tabs.

The summaries were easy to read, insightful and even included fun emojis to enhance the reading experience.

Now for the most important part — accuracy.

Since I had already watched the video in its entirety beforehand, I was able to read through the Insights and Summary tabs and verify the information.

Also: The best AI chatbots: ChatGPT and other noteworthy alternatives

The Insights tab was 100% accurate and presented all the major points of the video in a very digestible package. However, the Summary tab had one glaring error.

The second point in the AI-generated summary said, "The Fire HD 10 Plus tablet is a major upgrade with a sleek metal design, 64GB storage, Wi-Fi 6 support, and a fingerprint reader."

However, what ZDNET contributor Jason Cipriani said in the video was the exact opposite. When he was talking about those specific specs, he was referring to the Fire Max 11. When he did mention the Fire HD 10 Plus, he was actually talking about how underwhelming it was.

Also: How to use the Claude AI chatbot in Slack

Besides that one error, however, everything was accurate. And since the Summary provides timestamps next to every bullet, it is easy to verify the information.

Pricing

Eightify is free to download from the Chrome Web Store, and you have access to three free summaries per week for videos 30 minutes or shorter.

If you need unlimited summaries with no video-length limit, you can pay for a subscription at $4.95 per month, or $3.95 per month with a yearly subscription.

Bottom line

If you want summaries on YouTube videos, this is a convenient tool.

When I was in college, especially during the height of Covid when professors were constantly posting video lectures, a tool like this would have been extremely helpful in helping me better allocate my time.

The tool would have been helpful even if I did watch the video, because the eight-point summaries would have been a great way to ensure that I understood the major points made in the lecture.

Also: Your next phone will be able to run generative AI tools (even in Airplane Mode)

Like any AI model, this one is capable of getting information wrong. An easy way to guard against this is to use the timestamps next to each summary point to jump to the part of the video that bullet was taken from.

This way, you can confirm or debunk each major point while still avoiding watching the entire video.

Harnessing Human Emotions in Generative AI


Thought Experiment

Imagine you’re browsing items in a supermarket (virtual or physical), and an AI assistant is available for you to interact with to optimise your shopping for the day. You provide it with a budget and your household make-up (the number of people, their ages, dietary restrictions), and it generates a shopping list of relevant and available items.
Based on your style of evaluating products (e.g., reviews from shoppers like you, images, use in recipes, comparisons with local businesses), it also prioritises brands for you. It recognizes when you’re looking lost in the store and steps in to guide you with a map, or if the smells, sights and sounds of the store overwhelm you, it quickly re-routes you. You have a fun, swift, just-like-you-like-it shopping experience and are on your way out sooner than expected. You can even thank it for reminding you to carry an umbrella on a suddenly cloudy day!
In this scenario, it’s easy to allow ourselves to ease into the idea of an empathetic, caring and understanding companion that wants and does what’s best for you.
Let’s try another one on for size:
You happen to miss a credit card payment because you were travelling and it slipped your mind. The automated service centre notices this lapse and, while you’re away on vacation, starts sending you reminders. You ignore them because you don’t want to deal with them on holiday and don’t mind the small late fee – but the reminders continue to escalate in intensity:
Day 1: “Your credit score has been affected by your missed payment. Do you not care about your financial future?” (guilt-tripping)
Day 2: “Don’t be irresponsible, pay now!” (sense of obligation)
Day 3: “Do you not understand the importance of being on time?” (targeting character)
Day 4: “If you don’t pay now, you may not enjoy the same privileges with our bank…” (threat)
The automated service centre has studied your payment patterns, your chats and call transcripts with the bank, and knows you value your reputation – it uses any means necessary to make you pay, as that’s what it’s been trained to do.
(If you’re reading this thinking ‘This is so extreme, this wouldn’t happen,’ I’m happy-sad to tell you these were all inspired by my exchange with ChatGPT earlier today.)
In the pursuit of optimisation and automation, we may overlook both the best and worst case scenarios. In the best case, this integration could lead to psychological safety, reflect human needs that are explicit and latent in experiences, and even be a desirable presence for non-task-based interactions. The worst case is much like the risk posed by anyone who knows you inside-out while you don’t know them very well: they hold the power to be emotionally manipulative, exploitative and aggressive, and can create unsafe environments for everyone, including decision-makers, users and consumers.
At this stage, I’d like to share a conversation starter with you:

What kind of empathy and emotional response should we try to integrate in Generative AI solutions?
To explain, let’s break down the recognised types of Empathy*
*simplified
Cognitive empathy: The ability to understand how a person feels and what they might be thinking, which means exploring the why behind the feeling.
Emotional empathy/Affective empathy: The ability to feel or embody what someone else is feeling; essentially what mirror neurons do.

Behavioural empathy/Compassionate empathy: Acting on what someone else is feeling and trying to help alleviate their distress in a way that works for them, even if you don’t fully understand what they’re experiencing.

As a practitioner, my recommendation at this time is that Generative AI solutions should be trained to display Behavioural Empathy, without trying to develop Cognitive or Emotional empathy in them. This will allow AI to reflect human interest, emotion and need without developing tools to exploit them.
A few principles for implementing emotions in Generative AI and for their broader usage:

– If the emotional theory and the logical foundation are flawed, technically accurate implementations will still be flawed experiences for users

– While there is growing evidence that AI now better understands sarcasm, can explain humour, and can even write convincing dialogue to mimic consciousness, AI can still be very literal in its interactions, whereas humans don’t tend to express emotions in simple and straightforward ways

– Treat automation in organisations and its adoption as change-management journeys, especially when considering business integration: make interactions more intuitive and aligned with employee expectations, address human concerns, and increase a sense of agency to reduce the fear of being ‘replaced’

– Proactive intervention to balance untapped potential vs. negative exploitation: bringing the perspectives of business ethics, regulation, policy research and antitrust, and addressing misinformation, into our solutions

– Identifying latent biases and increasing representation in current training data: OpenAI’s models are only as good as the data they are trained on, so identify ways to capture hidden bias in data and use cases, and account for emerging human identities. For example, there are several gender identities recognised today that would not have been captured a decade ago, yet they make up a significant portion of consumers.

– Managing people’s mass response to AI’s perceived and displayed emotions: There is growing concern and excitement around AI developing ‘emotions’ of its own. While we may be far from sentient AI, perceived emotion will still deeply affect human engagement (for business leaders, consumer interactions, and employees)
As you go back into your work day, here are a few use cases you could consider as good candidates for experimentation:
Note: Experiments like these are not foolproof solutions; they will be iterative, as human emotion is dynamic and will remain so even as we learn to codify some aspects of it.

– People analytics: Determining the dynamic staffing of projects based on different work styles.

– Gaming: Finding the right level of challenge and achievement to keep a player engaged, while also dealing with new additions.

– Mental Health and Access to Care: Curating experiences that are aligned to people’s present state including recognizing signs of burnout, depression and other conditions.

– Education: Growth journeys in organisations for employees and in educational institutions for students mapped to individual styles of learning.

– Chatbots for Customer Service: Ability to reflect customer needs efficiently and respond appropriately.

– Creative Arts & Tasks: Helping creatives with brainstorming, idea generation and inspiration.

The post Harnessing Human Emotions in Generative AI appeared first on Analytics India Magazine.

AI Cloud Wars: Azure AI vs Vertex AI

At the recent Microsoft Build conference, alongside 50+ AI updates, a drinking game emerged — take a sip every time Satya Nadella says “AI” or “Copilot”. Twitter users warned of a potent hangover, as Microsoft integrated Copilot into almost every offering. Nevertheless, the tech giant also unveiled upgrades like ChatGPT integration in Bing, Microsoft Fabric, AI Hub, Azure AI Studio, plugins and more.

Of them all, Azure AI received the most upgrades, including Azure AI Studio, a provisioned-throughput model, and plugins for integrating external data sources. Its content safety feature can detect harmful content in images and text.

At Google I/O 2021, Microsoft’s friendly nemesis launched Vertex AI to build, deploy, and scale machine learning models faster and more easily. Azure AI is a comparable cloud-based service platform offering similar features. Now, let’s look at the differences between the two.

Read more: Now Everyone’s a Developer, Thanks to Microsoft

Vertex AI vs Azure AI

One of Vertex AI’s key advantages is its support for MLOps practices. It provides tools and features for scaling, managing, monitoring, and governing ML workloads, ensuring efficient and responsible ML development. Additionally, Vertex AI optimises infrastructure, reducing training time and costs. The platform offers comprehensive ML tooling, including APIs, foundation models, and open source models through the Model Garden. It also provides end-to-end MLOps capabilities, automating and standardising ML projects throughout the development life cycle.
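To make that concrete, here is a hedged sketch of what a basic model-deployment flow looks like with Vertex AI's Python SDK. The project, bucket and serving container are illustrative placeholders, not a definitive recipe.

```python
# Hedged sketch: upload a trained model to Vertex AI and deploy it
# to an endpoint. Resource names here are illustrative placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model.upload(
    display_name="demo-model",
    artifact_uri="gs://my-bucket/model/",  # exported model artifacts
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)

# Deploying creates a managed endpoint for online predictions.
endpoint = model.deploy(machine_type="n1-standard-2")
print(endpoint.predict(instances=[[1.0, 2.0, 3.0]]))
```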

On the other hand, Azure ML is a key component in providing developers with tools to construct, train, and implement machine learning models. It supports popular programming languages like Python and R, enabling efficient model building and deployment. Vertex AI, meanwhile, emphasises the integration of data and AI, connecting seamlessly with popular tools like BigQuery, Dataproc, and Spark.

This integration allows users to leverage BigQuery ML and facilitates accurate data labelling through Vertex Data Labelling. On the engineering side, Azure Databricks allows effective processing and analysis of large datasets. It supports Python, R, and Scala, facilitating data manipulation and preparation for accurate machine learning models. Just like Azure, Vertex AI caters to users with varying expertise levels through its low-code and no-code tooling.

Another feature that Microsoft brings to the table is its Azure Bot Service, tailored for chatbot and conversational AI applications. It simplifies development and deployment, improving customer service and automating processes.

Similarly, Google’s Dialogflow service is a cloud platform for constructing and launching conversational AI agents. It boasts a comprehensive suite of capabilities: an adept natural-language-understanding mechanism, a dialogue management engine capable of generating and answering diverse prompts and inquiries, a collection of pre-designed intents and entities for the swift creation of intricate chatbots, and integrations with various Google Cloud Platform services like Firebase Cloud Storage.
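For a sense of scale, the core detect-intent round trip is only a few lines with the google-cloud-dialogflow client; the project and session IDs below are placeholders, and this is a sketch rather than a production integration.

```python
# Hedged sketch: send one user utterance to a Dialogflow agent and
# read back the matched intent and reply. IDs are placeholders.
from google.cloud import dialogflow

client = dialogflow.SessionsClient()
session = client.session_path("my-project", "session-123")

query_input = dialogflow.QueryInput(
    text=dialogflow.TextInput(text="I'd like to book a table", language_code="en-US")
)
response = client.detect_intent(
    request={"session": session, "query_input": query_input}
)

print("Matched intent:", response.query_result.intent.display_name)
print("Agent reply:", response.query_result.fulfillment_text)
```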

Finally, Vertex AI offers an open and flexible AI infrastructure, supporting various ML infrastructure and model deployment options. It seamlessly integrates with MLOps tools, enabling users to scale model deployment, reduce inference costs, and effectively manage models in production.

Big Bets on AI Studio

Google launched Generative AI Studio at its recent developer conference, allowing users to interact with, fine-tune, and deploy foundation models. It offers features like a chat interface, prompt design, and model-weight adjustment. The Model Garden provides enterprise-ready foundation models, task-specific models, and APIs. Users can utilise models directly, fine-tune them in Generative AI Studio, or deploy them to data science notebooks.

Following suit, Microsoft launched Azure AI Studio, enabling developers to create personalised AI chatbot copilots using OpenAI models and their own data. This new feature extends the AI capabilities of Microsoft’s Azure OpenAI Service. In January, Microsoft officially released Azure OpenAI Service, and in March, it announced the availability of OpenAI’s GPT-4 within the service.
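To ground that, here is a hedged sketch of calling a GPT-4 deployment through Azure OpenAI Service using the era's openai Python library (the pre-1.0 API surface); the endpoint, key and deployment name are placeholders.

```python
# Hedged sketch: chat completion against an Azure OpenAI deployment.
# Endpoint, key and deployment name are illustrative placeholders.
import openai

openai.api_type = "azure"
openai.api_base = "https://my-resource.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR_AZURE_OPENAI_KEY"

response = openai.ChatCompletion.create(
    engine="my-gpt4-deployment",  # the *deployment* name, not the model name
    messages=[{"role": "user", "content": "Summarize Azure AI Studio in one line."}],
)
print(response["choices"][0]["message"]["content"])
```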

Who is Winning?

Just like in any market, having a variety of options is good for the consumer. Both choices are equally valid, but they face a challenge in gaining widespread acceptance among businesses. Vertex AI entered the market as the pioneering product, but it hasn’t yet gained significant traction among enterprises. Resistance stemming from established data frameworks hinders companies from transitioning directly to Vertex. It is likely that Microsoft’s offering will encounter similar obstacles.

Increasing competition is yet another obstacle: according to several users, AWS leaves both Azure and Vertex behind. AWS may be the better choice if your work involves Linux programming, while Azure aligns well with Microsoft users. AWS provides greater depth and control but has a steeper learning curve. It also boasts a larger online community. Azure is favoured by established companies, while AWS is popular among start-ups. Along similar lines, many believe AWS and Google Cloud Platform offer more value than Azure, but Azure is still useful. With the integration of AI across Azure products, though, that scenario might be changing.

Nevertheless, both these services reduce the obstacles for developers who are eager to delve into the AI-driven data science ecosystem.

Read more: Microsoft’s Blizzard Acquisition Comes With An AI Twist

The post AI Cloud Wars: Azure AI vs Vertex AI appeared first on Analytics India Magazine.

Dell’s sustainable data center management strategy: Interview with expert Alyson Freeman

Image: A globe with a tree growing out of it on top of glowing circuitry. (peach_adobe/Adobe Stock)

At Dell Technologies World in Las Vegas, sustainability and an increasing need for processing power were both hot topics. I spoke with Alyson Freeman, sustainability product manager at Dell’s Infrastructure Solutions Group, to discover what Dell is doing in the field of sustainable data center management.

In addition, Freeman pointed me to Dell’s overall sustainability goals. By 2030, Dell aims to use 100% recycled or renewable packaging. By 2040, Dell plans for all of the electricity it uses to come from renewable sources. By 2050, Dell plans to reach net zero greenhouse gas emissions across Scopes 1, 2 and 3.

The following is a transcript of my interview with Freeman. It has been edited for length and clarity.

Jump to:

  • Carbon footprint tracking in the data center
  • More data, less energy
  • Sustainability-as-a-service
  • What else businesses should consider around energy use
  • Dell’s recycling measures
  • The impact of artificial intelligence on sustainability
  • Dell’s sustainability wins

Carbon footprint tracking in the data center

Megan Crouse: What is your overall philosophy when it comes to identifying sustainability action opportunities for Dell?

Alyson Freeman: Clearly, it is based on the carbon footprints of our products. This becomes a very easy way to prioritize among the initiatives because then you know which ones are going to have the most impact.

[In terms of] laptops and desktops, their biggest impact on the environment is in the manufacturing phase. That’s why you see a lot of innovation around recycled plastic, bio-rubber and reclaimed carbon fiber.

SEE: Reaching environmental goals with advanced analytics can be good for your business.

On the data center side, we can take all those learnings from our Client Solutions Group (notebooks and peripherals) and apply them. But our biggest environmental impact is in the use phase — the energy it takes to run our equipment. The innovations you’ll see from us are around hardware and software for energy efficiency.

On the hardware side, we’re looking at more efficient air cooling, chassis configuration [and] liquid cooling options. Software [can] help manage that energy using Power Manager or CloudIQ to find under-utilized equipment in the data center. That is one of the biggest wastes of energy in a data center, and those are the types of things we work on.

Megan Crouse: A big issue in data centers is servers and cooling; this might be handled by chassis flow optimization and energy-efficient fans. What other developments from Dell reduce energy use in this area?

Alyson Freeman: For that, I would still answer software. It’s not all hardware for energy efficiency. You can’t improve what you can’t measure, so telemetry is very important. You need to understand where your power is going and how it’s being used. OpenManage Enterprise Power Manager is on our servers, and it allows you to find those under-utilized or “zombie” servers and place power caps. It’s a one-to-many management system, so you can measure an entire data center at a time, and it will also measure your carbon footprint. Similarly, CloudIQ is on the storage side. It’s our power management software that will also measure carbon footprint.
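As a concrete, hedged illustration of the telemetry idea Freeman describes (not Power Manager's own interface), Dell servers expose power readings through the standard DMTF Redfish API on iDRAC; the hostname, credentials and chassis path below are assumptions.

```python
# Hedged sketch: read a server's power draw via the Redfish API that
# Dell iDRAC exposes. Host, credentials and paths are placeholders.
import requests

IDRAC = "https://idrac.example.com"

resp = requests.get(
    f"{IDRAC}/redfish/v1/Chassis/System.Embedded.1/Power",
    auth=("monitor-user", "password"),
    verify=False,  # lab-only shortcut; verify TLS properly in production
)
resp.raise_for_status()

power = resp.json()["PowerControl"][0]
print("Current draw (W):", power.get("PowerConsumedWatts"))
print("Average draw (W):", power.get("PowerMetrics", {}).get("AverageConsumedWatts"))
```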

Megan Crouse: What reduction comparison period is meaningful? Meaning, if you can show the data center uses less power today than yesterday but much more than three years ago, what is your corporate responsibility in regards to keeping track of that?

Alyson Freeman: I’m sure there’s a corporate responsibility, but our customers want it. They want to know when it makes the most sense to do a tech refresh and invest in a newer, more efficient technology and how much that’s going to change their energy bills over time. We have an assessment to help customers quickly understand — it’s not a super detailed calculation — that trade-off between the tech refresh and how much energy savings and carbon savings you’ll have over time. There isn’t a one-size-fits-all answer. It really depends on the customer and which products and configurations they have.

More data, less energy

Megan Crouse: At the conference on Monday during the analyst Q&A, Michael Dell talked about how organizations want to crunch a massive amount of data while using less energy. How is Dell working on that?

Alyson Freeman: We know that the world is going to need more and more data, especially with all of the generative AI, and our responsibility is to do that as efficiently as possible. AI can help us solve some of these energy issues. There isn’t an off-the-shelf solution today since generative AI is so new …

What can new data insights tell you about how to be more efficient and how to ensure you’re using renewable energy at the right time and putting workloads in the right place? You’re always going to need the latest products. You need the best processors to run AI. But that doesn’t mean your old server isn’t useful anymore.

Megan Crouse: Is that connected to the concept of “zombie servers?”

Alyson Freeman: Exactly. You want to make sure that if it’s plugged into the wall, you’re getting compute benefit from it.

Sustainability-as-a-service

Megan Crouse: Dell has pitched the idea of as-a-service as a replacement for the “buy and replace” model. How does this reduce energy consumption?

Alyson Freeman: There are a lot of ways the as-a-service model can be more sustainable. One is we can take care of the end-of-life for you. We can take back the old equipment. There isn’t a burden on every individual buyer to figure out what to do with their old equipment. Another is we can help control where your products are being used. For example, if they’re in a co-located data center powered by renewable energy, we can make sure that is more sustainable.

What else businesses should consider around energy use

Megan Crouse: What other decisions should organizations consider around replenishment and modernization?

Alyson Freeman: You look at the return on investment. How much is this more efficient product going to cost me? How much am I going to save on energy bills over time? How much am I not going to spend on carbon offsets because I’m using this?

Data centers are a rare area where you don’t have a trade-off between your business cost and your sustainability decisions. Sometimes, the more sustainable option might cost a little bit more. In the data center, we’re lucky because the more sustainable option does cost less in your energy bills every month.

Dell’s recycling measures

Megan Crouse: What is the state of renewables in Dell’s infrastructure manufacturing supply chain, such as in the PowerEdge servers with their minimal use of paint?

Alyson Freeman: You have to design with the end in mind. If you want to recycle a server, ultimately, you can’t do it if there’s glue on it or if the plastic is mixed with the aluminum. We’re learning from our take-back programs about what’s most important to be able to recycle the most amount of material, and we have to design it in from the very beginning or else it won’t happen at the end of its life. That’s why it’s critical.

The impact of artificial intelligence on sustainability

Megan Crouse: Some studies have found that large natural-language-processing AI models consume massive amounts of energy. What do you see customers talking about in terms of the sustainability of AI, and how does that impact the way Dell makes and works with AI?

Alyson Freeman: I don’t get a lot of large-language model questions, but I think the world is going to need more data, and we have to do it as sustainably and as efficiently as we possibly can.

Dell’s sustainability wins

Megan Crouse: What are you especially proud of in terms of Dell sustainability right now?

Alyson Freeman: Real-time telemetry measurement of carbon footprints. That is something that none of the large competitors do, and it takes a lot of work behind the scenes to get that into our products. I think it’s one of the things that will make a huge difference because we’re empowering all of our customers to have knowledge about their carbon footprint they didn’t have before, and that will drive a lot of change. It’s a multiplicative effect.

More news from Dell Technologies World

  • Dell brings more cloud products under APEX umbrella
  • Dell’s Project Helix is a wide-reaching generative AI service
  • Dell’s Project Helix heralds a move toward specifically trained generative AI
  • Dell reveals new edge as-a-service portfolio, NativeEdge
  • Dell VP on the changing world of DevOps, CloudOps, AI and multicloud by design

Disclaimer: Dell paid for my airfare, accommodations and some meals for the Dell Technologies World event held May 22-25 in Las Vegas.


How to use Wordtune AI to rewrite texts on your iPhone


You're emailing or chatting with someone on your iPhone and could use a helping hand to generate the right text. One tool that can assist you is the Wordtune AI app from AI21 Labs. Accessible as a third-party keyboard for iOS, Wordtune can write content based on your description.

Also: The best AI chatbots to try

You can tell Wordtune to write something from scratch or rewrite existing text that needs fine-tuning. The app can generate simple messages and emails, photo captions, LinkedIn or Twitter posts, cover letters, blog posts, and more.

Released in December 2022, the original version of Wordtune was designed simply to rephrase and rewrite your sentences. But with the latest update, the app can generate the content you need from the get-go. You can enter text via the Wordtune keyboard by typing it out or by speaking it into your iPhone's built-in microphone. Since it works as a keyboard, you can use it with any text-based app.

Also: Meet Aria: Opera's new built-in generative AI assistant

Wordtune also works on an iPad, though it runs in iPhone compatibility mode so it appears as a small window. An Android version is in the works. Beyond enlisting Wordtune as a keyboard, you can generate text directly through the mobile app itself and through the Wordtune website.

The free flavor of the app limits you to 10 instances of rewriting per day. For $25 a month or $120 a year, a premium edition offers unlimited writing and rewriting and will rewrite entire paragraphs. Now, here's how Wordtune works.

How to use Wordtune AI on your iPhone


OpenAI could ‘cease operating’ in EU countries due to AI regulations

Samuel Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law May 16, 2023 in Washington, DC. (Photo: Win McNamee/Getty Images)

Sam Altman, OpenAI CEO, said the company could stop all operations in the European Union if it can't comply with impending artificial intelligence regulations.

During a stop on what he's called the "OpenAI world tour," Altman spoke at University College London about the company's advancements and was asked about the EU's proposed AI regulations. The CEO explained that OpenAI has issues with how the regulations are currently written.

Also: 3 ways OpenAI says we should start to tackle AI regulation

According to Time, the regulations, which are still being revised, may designate the company's ChatGPT and GPT-4 as "high risk," requiring increased safety compliance.

"Either we'll be able to solve those requirements or not. If we can comply, we will, and if we can't, we'll cease operating…," Altman said. "We will try. But there are technical limits to what's possible."

Altman explained he didn't believe the law was fundamentally flawed but stressed that the details were critical. He expressed support for a balanced approach to regulation but acknowledged the risks of AI, in particular for AI-generated misinformation with the potential to sway public opinion.

Also: Meet Aria: Opera's new built-in generative AI assistant

On this, however, Altman maintained that AI language models are less influential in spreading misinformation compared to social media platforms. "You can generate all the disinformation you want with GPT-4, but if it's not being spread, it's not going to do much," he added.

Protesting AGI development

During his appearance at University College London, Altman was greeted by protesters gathered outside. The group expressed concerns over OpenAI's role in the future development of artificial general intelligence (AGI), a superintelligent AI system that could surpass human intelligence.

The group held signs that read "Don't build AGI" and "OpenAI, stop trying to build AGI," as shared by Twitter user James Vincent. This came after a statement from OpenAI on the development of artificial general intelligence, the company's view of AI regulation, and what consequences the lack of regulation could have on humanity.

Also: AI is coming to a business near you. But let's sort these problems first

"It's time that the public step up and say: it is our future and we should have a choice over it," protester Gideon Futterman told Time. "We shouldn't be allowing multimillionaires from Silicon Valley with a messiah complex to decide what we want."

The creation of AGI is a hot topic among experts and ethicists, though still considered to be far from becoming reality.


Open-source AI holds a party in New York thanks to Stability.ai and Lightning.ai


Emad Mostaque, left, of Stability.ai, joined William Falcon of Lightning.ai on stage to emphasize the essential nature of open-source AI software.

At a celebratory meetup last Friday evening on Manhattan's West Side, two of the leading young artificial intelligence software firms, Stability.ai and Lightning.ai, rallied software developers and startup owners to the cause of keeping AI open-source.

The hashtag for the evening: #keepAIopensource.

Also: The best AI chatbots: ChatGPT and other noteworthy alternatives

About 700 guests, including developers, representatives of large tech firms, and startup owners, gathered for hors d'oeuvres and drinks as sunset streamed through the windows of the sixth-floor event space of Glass House by the Hudson River. The lively crowd huddled around laptops on high-top tables to discuss demos of programs making use of AI algorithms in one way or another.

Banners on the stage showed a colorful illustration of what appeared to be coders cheerfully working together, with the slogan "Keep AI open source."

Shortly before 8 p.m., Emad Mostaque, founder and CEO of Stability.ai, took the stage with Lightning.ai CEO William Falcon. The two took turns presenting their pitch for why AI should be kept an open-source affair.

Also: Why open source is essential to allaying AI fears, according to Stability.ai founder

Stability.ai is best known for Stable Diffusion, a service that lets you type a phrase and have it converted into an image in any number of styles.

Lightning.ai is best known for a library of functions that plug into PyTorch to smooth the task of training programs, including the large language models that underlie OpenAI's ChatGPT.
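For readers unfamiliar with it, here is a minimal sketch of the kind of boilerplate reduction PyTorch Lightning is known for; the model and data are toy stand-ins, not anything demonstrated at the event.

```python
# Hedged sketch: a tiny PyTorch Lightning training loop. The module
# defines the model, loss and optimizer; Trainer handles the rest.
import torch
import pytorch_lightning as pl
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

class TinyRegressor(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(10, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.mse_loss(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

data = DataLoader(
    TensorDataset(torch.randn(64, 10), torch.randn(64, 1)), batch_size=16
)
pl.Trainer(max_epochs=1, logger=False).fit(TinyRegressor(), data)
```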

The evening's event was clearly a reference to the sudden tilt by prominent companies to keep secret the technical details and the source code for AI programs. In March, OpenAI broke with precedent in the field by declining to disclose technical details of its latest large language model, GPT-4.


Google emulated OpenAI this month when it declined to disclose technical details of its new PaLM 2 language model.

Both gestures break with decades of disclosure by AI scientists, and luminaries of the field have warned that such secrecy can have a chilling effect on AI research. Mostaque has warned that closed-source AI programs are incompatible with the role of AI going forward, and he has pledged to be "the leader of open even as everyone else does closed."

Also: Google follows OpenAI in saying almost nothing about its new PaLM 2 AI program

Falcon presented a mash-up of media messages and lessons from the history of open-source software. He noted that the company's software had built upon the success of Torch, developed 20 years ago, which was later integrated into Meta's PyTorch library. "Stuff that's possible today wouldn't have been possible without the Torch people," said Falcon.

Falcon played the original Macintosh computer commercial of 1984, evoking Big Brother, on the stage's projection screen, and also called up images from Apple's iconic "Think Different" ad campaign when the late Steve Jobs returned to the company.

Also: ChatGPT's success could prompt a damaging swing to secrecy in AI, says AI pioneer Bengio

"At the end of the day, a lot of things are happening," said Falcon. "It [the AI movement] could go fully closed, or fully open."

Alluding to remarks by OpenAI CEO Sam Altman that code needs to be kept secret to protect the world from bad actors, Falcon remarked, "Any tech we create, someone will find a bad use for it."

Falcon ended with a photograph of Jobs giving a defiant gesture to IBM, to applause and laughter from the audience.

Taking his turn at the microphone, Mostaque told the audience that the battle for open-source software is about a tool, AI, that allows people to tell stories. "You should be able to tell your stories," he told the audience.

Also: With GPT-4, OpenAI opts for secrecy versus disclosure

"We want to bring Stability to every country in the way that the people in that country will be able to control their own destiny," said Mostaque. "Because that is the story of your life, not a panopticon that is controlled by a few companies."

The large language models are "so important they will rule our lives," and therefore, "should not be closed," said Mostaque. "They should not be black boxes because who will be making the decisions?"

Mostaque predicted that the use of such programs will spread globally, and emphasized that populations in each country will have to use them to relate their own narrative.

"Why should an entire nation not be able to create?" said Mostaque.

Also: 3 ways OpenAI says we should start to tackle AI regulation

"What happens when we give one of these models to every child on earth?" He urged: "Think about it: Does anyone think that every child on earth will not have their own AI?"

"That is the mission," said Mostaque, "to activate humanity's potential."


Google ads will be customized for you using generative AI


Generative AI is actively changing the way things are done, including online advertising. A growing number of businesses are turning to generative AI to support their commercial and marketing goals.

For instance, Meta added generative AI tools to its ads and Bing introduced ads into its massively popular AI chatbot Bing Chat. Now Google joins the list.

Also: 3 ways OpenAI says we should start to tackle AI regulation

AI has always been a part of Google Ads, but now Google is introducing generative AI advancements to the platform.

Google is adding a conversational experience within Google Ads to help businesses get their campaigns out the door more quickly and efficiently.

Using this tool, businesses can ask Google AI for ideas to jumpstart their creative projects and even have it generate keywords, headlines, descriptions, images, and other campaign assets, according to Google.

Google is also supercharging its automatically created assets (ACA), which use content from a business's landing pages and existing ads, with generative AI. Now ACA will be able to create and adapt Google Search ads based on the user's search query.

Also: Google upgrades its AI flood-forecasting technology to help more people

For example, Google says that if someone searches for "skin care for dry sensitive skin," Google AI can use a business's content and existing ads to create a new ad that better matches the search.

In this case, the ad's revised headline would say "Soothe Your Dry, Sensitive Skin" to better align with the search.

Personalized ads can help businesses better reach their audience and help consumers find what they need more easily. But the manipulation of headlines to say what the user wants to hear raises the question of whether the ad is actually fit to meet the user's needs or is just being made to appear that way.

Also: Every major AI feature announced at Google I/O 2023

Lastly, Google is incorporating generative AI in Performance Max ad campaigns to assist businesses in the creation of custom assets. This feature will also be available in the new conversational experience in Google Ads.


With new grant program, OpenAI aims to crowdsource AI regulation


OpenAI says it’s launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow — “within the bounds defined by the law.”

The launch of the grant program comes after OpenAI’s calls for an international regulatory body for AI akin to that governing nuclear power. In its proposal for such a body, OpenAI co-founders Sam Altman, Greg Brockman and Ilya Sutskever argued that the pace of innovation in AI is so fast that we can’t expect existing authorities to adequately rein in the tech — a sentiment today’s announcement captures, as well.

Concretely, OpenAI says it’s seeking to fund individuals, teams and organizations to develop proofs of concept for a “democratic process” that could answer questions about guardrails for AI. The company wants to learn from these experiments, it says, and use them as the basis for a more global — and more ambitious — process going forward.

“While these initial experiments are not (at least for now) intended to be binding for decisions, we hope that they explore decision relevant questions and build novel democratic tools that can more directly inform decisions in the future,” the company wrote in a blog post published today. “This grant represents a step to establish democratic processes for overseeing … superintelligence.”

With the grants, furnished by OpenAI’s nonprofit organization, OpenAI hopes to establish a process reflecting the Platonic ideal of democracy: a “broadly representative” group of people exchanging opinions, engaging in “deliberate” discussions, and ultimately deciding on an outcome via a transparent decision-making process. Ideally, OpenAI says, the process will help to answer questions like “Under what conditions should AI systems condemn or criticize public figures, given different opinions across groups regarding those figures?” and “How should disputed views be represented in AI outputs?”

“The primary objective of this grant is to foster innovation in processes — we need improved democratic methods to govern AI behavior,” OpenAI writes. “We believe that decisions about how AI behaves should be shaped by diverse perspectives reflecting the public interest.”

In the announcement post, OpenAI implies that the grant program is entirely divorced from its commercial interests. But that’s a bit of a tough pill to swallow, given Altman’s recent criticisms of proposed AI regulation in the EU. The timing seems conspicuous, too, following Altman’s appearance before the U.S. Senate last week, where he advocated for a very specific flavor of AI regulation that would have a minimal effect on OpenAI’s technology as it exists today.

Still, even if the program ends up being self-serving, it’s an interesting direction to take AI policymaking (albeit duplicative of the EU’s efforts in some obvious ways). I, for one, am curious to see what sort of ideas for “democratic processes” emerge — and which applicants OpenAI ends up choosing.

Folks can apply to the OpenAI grant program starting today — the deadline is June 24 at 9 p.m. Once the application period closes, OpenAI will select ten successful recipients. Recipients will have to showcase a concept involving at least 500 participants, publish a public report on their findings by October 20 and open-source the code behind their work.

You Are to be Blamed for ChatGPT’s Flaws

In ChatGPT, We Trust

Should we trust ChatGPT? Bard and ChatGPT claim to give precise answers to all the world’s questions until hallucination kicks in. They might be a big source of information for many, but at the same time, they are criticised for biases and misinformation. Regulatory authorities and governments all over the world have been scrambling to control the misinformation spread through these platforms.

During the US Senate hearing on AI oversight, OpenAI CEO Sam Altman said, “This [ChatGPT] is a tool that can generate content more efficiently than ever before. Here, the user can test the accuracy, change it if they don’t like it and get another version. But the content generated still spreads through social media, texts, or other similar ways.”

He explained that interaction with ChatGPT is a single-player experience: the user is interacting with a tool that generates content but does not share it without their consent.

Altman, too, expressed concerns about the technology and called for regulations around it. But when asked about the impact a technology like this could have on elections or on spreading misinformation, he agreed with the premise while arguing that such regulations should be framed through a different lens than the one applied to social media.

AI takes the fall for bad actors

Speaking of misinformation, the chatbot can sometimes be pushed by certain prompts into generating false information. Even in cases like these, ChatGPT often refuses to generate content that could be hateful or false. So the blame appears to lie more with the user than with ChatGPT.

If someone wants to create a fake ad or false information about elections, they can do that with or without ChatGPT or similar technology. Even before such AI models, the internet was filled with misinformation splattered across Twitter, which spread like wildfire – about COVID vaccines or presidential elections. The recent circulation of a fake photograph of an explosion at the Pentagon was another example of how vulnerable people are to misinformation on social media.

Similar photographs can be made without AI, using Photoshop. The person who believes the photograph is real and spreads it on social media or news platforms is the one responsible for verifying its origin.

We are not saying that generating fake information is fine, but instead of vilifying the tool used to create it, punish the user who makes and spreads it. And beyond the creator of the content, the person who consumes and shares it on social media also has a responsibility to verify it. In certain cases, the latter should be held more responsible for the spread of misinformation than the former is for creating it, for it only becomes news when it gets a platform.

Blame the data

“As an AI language model, I cannot…” is a phrase you will get for a lot of the prompts you input into ChatGPT. Even ChatGPT recognises that there is a lot it cannot generate, much of which could be misinformation. And even when it does, it declares that the output “may not be true”. OpenAI puts a clear disclaimer on its website: ChatGPT may produce inaccurate information about people, places, or facts. Does any publishing website put up such disclaimers? The company has also recently given users more control over their data to protect privacy.

There have been cases where ChatGPT made false accusations against people, and it is now facing lawsuits. In certain instances, it is also known to make up anonymous sources, managing to sound profound while doing so.

You might also know a lot of people who do the same. And anyway, it is trained on information freely available on the internet: the same internet that is filled with conspiracy theories, false information, and hateful language. OpenAI has done a great job of fine-tuning it to be civil and not spew garbage.

It is fed information from hundreds of thousands of websites, many of them filled with confidently written lies and manipulated facts. No one can claim that all information on the internet is true and trustworthy. It would be interesting to see Elon Musk’s chatbot fed on Twitter data.

The same is the case with ChatGPT – you can choose to trust it and spread misinformation, even though OpenAI tells you not to.

Internet, ChatGPT Caught in Cycle

In the end, as Yann LeCun put it, it is just a text generator. A person can write the same misinformation on social media without using AI. And as Altman said at the Senate hearing, OpenAI has already deployed models that can detect whether text was generated by ChatGPT, so the future looks safer for people concerned about AI spreading misinformation.

On a concerning note, the internet is filling up with content generated by ChatGPT and similar models. A lot of it is easy to spot because people post the content without even verifying or editing it; the phrase “As an AI language model” can be found on several websites polluting the internet.

It’s a full cycle: ChatGPT is fed internet data, and now the internet is filled with articles written by AI. With Google Bard connected to the internet in real time and ChatGPT connected to Bing for real-time internet access, the case might get worse — models being trained on the very data they generated!

This raises the question: is the recent development of connecting chatbots to the internet really a good idea? When ChatGPT had a knowledge cutoff of 2021, people initially trusted it, but over time that trust somewhat eroded.

Now that Google and Microsoft are claiming that the models will improve with real-time internet access, users will start treating them as a source of information. This could funnel a lot of misinformation directly to users: a combination of real-time information and hallucination-prone LLM chatbots.

Even then, the one to blame is the internet and its data, not ChatGPT or Bard. It is the data generated by humans that pollutes ChatGPT’s responses.

The post You Are to be Blamed for ChatGPT’s Flaws appeared first on Analytics India Magazine.