Salesforce Chief Ethicist Deems Doomsday AI Discussions a ‘Waste of Time’

At some point over the past year, you have likely found yourself in conversations about the prospect of superintelligent AI systems replacing human workers, or, more alarmingly still, AI ushering in a world reminiscent of science-fiction doomsday scenarios.

Paula Goldman isn’t with you on this. The chief ethical and humane use officer at Salesforce believes such discussions are a total ‘waste of time’.

AI systems are already in the copilot phase and are only going to get better over time. “With each advancement in AI, we’re continually refining our safeguards and addressing emerging risks,” she told AIM in an exclusive interview on the sidelines of TrailblazerDX 2024, Salesforce’s developers conference.

The field of AI ethics is not a recent development, Goldman said; it has been evolving for decades. Issues such as accuracy, bias, and ethical considerations have long been studied and addressed.

“In fact, our current understanding and solutions are built upon this foundation of past research and development. While predicting the future capabilities of AI is challenging, our focus should be on continuously enhancing our safeguards to manage potential risks effectively, ensuring we’re equipped to handle even the most advanced AI technologies,” she added.

Big Fan of the EU AI Act

As AI becomes increasingly ubiquitous, the focus has shifted to its ethical implications, prompting governments and lawmakers to introduce stringent policies regarding its use.

The European Union (EU) became the first jurisdiction in the world to introduce such legislation with the AI Act, regulating a technology that many EU lawmakers believe could harm society if left unchecked.

While jurisdictions, including India, are approaching AI regulation differently, Goldman stated that she is a big fan of the EU’s AI Act.

“While there’s ongoing debate about regulating AI models, what’s truly crucial is regulating the outcomes and associated risks of AI, and that’s the general approach the EU AI Act takes,” Goldman said.

The Act categorises AI systems into four risk levels – unacceptable risk, high risk, limited risk, and minimal or no risk. It focuses on identifying and controlling the highest-risk outcomes, such as fair consideration in job applications or loan approvals, which significantly impact people’s lives.

“Moreover, the EU applies standards not only to those creating the models but also to the apps built on them, the data used, and the companies utilising these products. This comprehensive approach, treating it as a layered process, is often overlooked but is essential for effective regulation.”

However, with regulation comes the fear of hampering innovation, stifling creativity, and impeding the pace of technological advancement.

Nonetheless, Goldman believes that AI regulation is urgently important. “These regulations should be established through democratic processes and involve multiple stakeholders. I am proud of the efforts being made in this regard and emphasise the significance of regulations that transcend individual companies,” she said.

Human at the Helm

At TrailblazerDX, Goldman and her colleagues, including Silvio Savarese, chief scientist at Salesforce, stressed the importance of building trust in AI among consumers.

Savarese even stated that the inability to build consumer “trust could lead to the next AI winter”. Goldman, along with her colleagues, emphasised the critical need to establish transparency and accountability in AI systems to foster consumer trust and prevent potential setbacks in AI adoption.

“At Salesforce, we believe trusted AI needs a human at the helm. Rather than requiring human intervention for each AI interaction, we’re crafting robust controls across systems that empower humans to oversee AI outcomes, allowing them to concentrate on high-judgement tasks that demand their attention the most,” Goldman said.

Salesforce’s approach has been to empower its customers by handing over control, acknowledging that they are best positioned to understand their brand tone, customer expectations, and policies.

“For instance, with the Trust Layer, we plan to allow customers to adjust thresholds for toxicity detection according to their needs. Similarly, with features like Retrieval-Augmented Generation (RAG), customers can fine-tune the level of creativity they desire in AI-generated responses.

“Additionally, the incidents concerning AI ethics underscore the importance of government intervention in establishing regulatory frameworks, as these issues may vary across different regions and cultures. Hence, AI regulation by governments is deemed crucial,” she added.

Safeguarding is a Balancing Act

Moreover, as companies ship their AI products, it becomes critical for them to work with customers to ensure ethical, responsible use and to eliminate risks.

“At Salesforce, we release products when we deem them ready and responsible, but we also maintain humility, recognising that technology evolves continuously. While we rigorously test products internally from various angles, it’s during pilot phases with customers that we truly uncover potential issues and areas for improvement,” she said.

According to Goldman, this iterative process ensures that products meet certain thresholds before release, and the company continues to learn and enhance them in collaboration with its customers.

“It’s about striking a balance between confidence in our products and openness to ongoing refinement.”

The post Salesforce Chief Ethicist Deems Doomsday AI Discussions a ‘Waste of Time’ appeared first on Analytics India Magazine.

OpenAI is Opening its New Tokyo Office This Month

Exactly a year after announcing its plan to open an office in Japan, OpenAI is opening its Tokyo office this month. It will be the company’s first office in Asia.

According to reports, OpenAI is also planning to release updates to its AI models for the Japanese language.

The Tokyo office will be OpenAI’s third international expansion, after London and Dublin.

OpenAI CEO Sam Altman had said he was considering expanding services by opening an office in Japan when he met Japan’s Prime Minister Fumio Kishida, with whom he discussed the merits of the technology as well as its privacy and security risks, chief cabinet secretary Hirokazu Matsuno told Reuters.

Matsuno said that Japan is currently evaluating the possibility of introducing OpenAI’s technology in the country.

After meeting with Kishida, Altman told reporters, “We hope to … build something great for Japanese people, make the models better for Japanese language and Japanese culture.” This is one of the first stops of Altman’s world tour after the launch of ChatGPT.

Furthermore, Taro Kono, who is responsible for Japan’s digital transformation in the cabinet, expressed optimism that AI technologies would play a significant role in the government’s workstyle reforms. However, he acknowledged that introducing ChatGPT into public offices would be challenging in the near future due to issues such as the potential for the technology to produce false information.

Apart from Japan, OpenAI had also announced plans in December to open an office in India. Rishi Jaitly, who has held executive positions including vice president at Twitter, will assume the role of senior advisor at OpenAI to guide the company through India’s AI policy and regulatory environment. Furthermore, OpenAI executives Anna Adeola Makanju, global head of public policy, and James Hairston, along with Jaitly, recently met MoS for Electronics and Information Technology, Rajeev Chandrashekar.

The post OpenAI is Opening its New Tokyo Office This Month appeared first on Analytics India Magazine.

Hume AI’s Chatbot is A Chatty Stranger Who Never Shuts Up! 

“Is this even real?” was our first reaction when we tried out EVI. Hume AI took everyone by surprise with the release of EVI last week. Named after Scottish philosopher David Hume, the company gave the world its first emotionally intelligent voice AI.

Empathic Voice Interface, or EVI, can engage in conversations just like humans, understanding and expressing emotions based on the user’s tone of voice. It can interpret nuanced vocal modulations and generate empathetic responses, leading to many likening it to the next ‘ChatGPT moment’.

There's only 2 times I've seen an AI demo that genuinely blew me away.
The first was ChatGPT, the second was whatever @hume_ai just showed me. Holy fuck is this going to change everything

— Avi (@AviSchiffmann) February 1, 2024

While ChatGPT is currently limited to generating text responses without much understanding of sentiment, EVI steps in to fill this gap.

“ChatGPT is text only. We think the future of AI is a voice app, the voice is four times faster than text. The problem is that when we’re speaking we expect the AI to understand not just what we’re saying but how we’re saying it,” said Alan Cowen, Hume AI chief, in a recent podcast.

However, OpenAI caught up soon. A day later, it unveiled Voice Engine, a model which can generate natural-sounding speech from text input and a mere 15-second audio sample. Notably, Voice Engine can create emotive and realistic voices using this brief audio input, which is similar to what EVI does.

While OpenAI’s Voice Engine is not publicly available yet, the EVI demo impressed everyone and left them eager for more, with the website reaching full capacity.

“Just tested Hume’s empathic voice, and it’s quite surprised, sad, embarrassed, perplexed, excited! It analyses my voice for many emotional categories and also generates multiple emotions and voices. It switches often as well!” wrote a user on X after trying EVI. Another user posted, “I shared the problems I am facing, and it felt like I am talking to a real person.”

How EVI Works

EVI is powered by an empathic large language model (eLLM), which understands and emulates tones of voice, word emphasis, and more to optimise human-AI interaction.

It can understand human emotions such as amusement, anger, awkwardness, boredom, calmness, confusion, pain, and more. Interestingly, it can even catch lies. The company claims that it can detect 53 human emotions.
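
To illustrate the general idea, here is a hypothetical Python sketch (not Hume’s actual API or model) of how per-emotion scores from a voice-analysis step could steer the tone of a generated reply:

def style_prompt(emotion_scores, user_text, top_k=2):
    # Pick the dominant detected emotions and fold them into the generation prompt
    top = sorted(emotion_scores, key=emotion_scores.get, reverse=True)[:top_k]
    return f"The user sounds {' and '.join(top)}. Reply empathetically to: {user_text}"

scores = {"awkwardness": 0.7, "amusement": 0.2, "calmness": 0.1}  # made-up scores
print(style_prompt(scores, "I froze during my presentation today."))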

Hume observes that humans express a lot without words. Whether gasping in fear, sighing from tiredness, grunting with effort, or laughing in joy – these sounds, called ‘vocal bursts’, convey a range of emotions.

The company has gathered thousands of audio clips from people worldwide to determine the emotional meanings of vocal bursts, collecting data from over 16,000 individuals across the United States, China, India, South Africa, and Venezuela.

Hume isn’t stopping at voice alone. It plans to train its LLM to recognise various facial expressions worldwide, enabling the model to understand user emotions based on facial cues.

“What we’ve done at Hume is build models that understand expressions a lot better and we’ve integrated those into large language models. These models understand beyond language. What’s going on in the voice, what’s going on in facial expression… and it can learn from that,” said Cowen.

EVI in Action

EVI can serve as a perfect digital AI assistant that can uplift you on a bad day or provide calming support in stressful situations. It could be Samantha from the movie ‘Her’. The benefits of EVI extend far beyond personal use. It can be used in customer service to analyse a customer’s voice during calls or chats.

“There’s a lot of AI going into customer support right now. Some of our early design partners for this new API are people who want to take the automated customer support to make it a lot better,” said Cowen.

Lately, a lot of research has centred around creating humanoids. OpenAI recently partnered with Figure AI to build generative AI-powered humanoids.

In a recent video released by Figure, humanoid robot Figure 01 was seen holding a natural conversation perfectly with a human while passing him an apple. EVI can make conversations with humanoids more human-like.

Also, it’s a perfect tool for sentiment analysis for market forecasting and brand sentiment research. It can also be integrated into mental health apps, offering supportive and encouraging responses based on a user’s voice patterns, helping in clinical diagnosis.

EVI can personalise learning experiences for students, identifying moments of confusion or discouragement and offering additional support and explanation.

The Ugly Side of EVI

While Hume’s EVI is impressive with its emotional intelligence, some users might find its constant chatter a bit too much. It’s like the stranger at the bus stop who just wouldn’t shut up.

And just like with every technology, EVI too has the potential for misuse. Using its persuasive capabilities, it can manipulate individuals – for instance, it can be used to market unethical products like drugs to teenagers. During election times, EVI could be misused to influence voter behaviour through targeted messaging and emotional manipulation.

Though EVI could give a big boost to the concept of AI partners, human-AI relationships may be viewed as unhealthy. “If you’re dealing with an AI girlfriend and spending more time with it than with humans, that’s going to be a negative for you,” said Cowen.

To keep its misuse in check, Hume supports ‘The Hume Initiative’, a nonprofit that works with experts to set ethical guidelines for using empathetic AI. The website lists unsupported use cases such as manipulation, deception, unbounded empathetic AI, and optimising for reduced well-being, which includes psychological warfare.

The post Hume AI’s Chatbot is A Chatty Stranger Who Never Shuts Up! appeared first on Analytics India Magazine.

How the New Breed of LLMs is Replacing OpenAI and the Likes

Of course, OpenAI, Mistral, Claude and the likes may adapt. But will they manage to stay competitive in this evolving market? Last week, Databricks launched DBRX. It clearly illustrates the new trend: specialized, lightweight, enterprise-oriented architectures that combine multiple LLMs and deliver better results at a fraction of the cost. Monolithic solutions where you pay by the token encourage the proliferation of models with billions or trillions of tokens, weights and parameters. They are embraced by companies such as Nvidia because they use a lot of GPU power and make chip producers wealthy. One of the drawbacks is the cost incurred by the customer, with no guarantee of positive ROI. The quality may also suffer (hallucinations).

In this article, I discuss the new type of architecture under development. Hallucination-free, they achieve better results at a fraction of the cost and run much faster. Sometimes without GPU, sometimes without training. Targeting professional users rather than the layman, they rely on self-tuning and customization. Indeed, there is no universal evaluation metric: laymen and experts have very different ratings and expectations when using these tools.

Much of this discussion is based on the technology that I develop for a Fortune 100 company. I show the benefits, but also potential issues. Many of my competitors are moving in the same direction.

Questioning the GenAI Startup Funding Model

Before diving into the architecture of new LLMs, let’s first discuss the current funding model. Many startups get funding from large companies such as Microsoft, Nvidia or Amazon. It means that they have to use their cloud solutions, services and products. The result is high costs for the customer. Startups that rely on vendor-neutral VC funding face a similar challenge: you cannot raise VC money by saying that you could do better and charge 1000x less. VC firms expect to make billions of dollars, not mere millions. To maintain this ecosystem, players spend a lot of money on advertising and hype. In the end, if early investors can quickly make big money through acquisitions, it is a win. What happens when clients realize ROI is negative is unimportant. As long as it does not happen too soon! But can investors even achieve this short-term goal?

The problem is compounded by the fact that researchers believe deep neural networks (DNN) are the panacea, with issues simply fixed by using bigger data, multiple transforms to make DNN work, or front-end patches such as prompt engineering, to address foundational back-end problems. Sadly, no one works on ground-breaking innovations outside DNNs. I am an exception.

In the end, very few self-funded entrepreneurs can compete, offering a far less expensive alternative with no plan on becoming a billionaire. I may be the only one able to survive and thrive, long-term. My intellectual property is open-source, patent-free, and comes with extensive documentation, source code, and comparisons. It appeals to large, traditional corporations. The word is out; it is no longer a secret. In turn, it puts pressure on big players to offer better LLMs. They can see how I do it and implement the same algorithms on their end. Or come up with their own solutions independently. Either way, the new type of architecture is pretty much the same in all cases, not much different from mine. The new Databricks LLM (DBRX) epitomizes this trend. Mine is called xLLM.

Surprisingly, none of the startups working on new LLMs consider monetizing their products via advertising: blending organic output with sponsored results relevant to the user prompt. I am contemplating doing it, with a large client interested in signing up when the option is available.

The New Breed of LLMs in One Picture

As concisely stated by one of my clients, the main issues to address are:

  • Latency
  • Accuracy and relevance
  • Cost
  • Liability (avoid data leakage, costly hallucinations)

In addition to blending specialized LLMs (one per top category, with its own set of embeddings and other summary tables), a new trend is emerging. It consists of blending multiple LLMs focused on the same topic, each with its own flavor: technical, general, or based on different parameters. These models are then combined just like XGBoost combines multiple small decision trees to get the best from all. In short, an ensemble method.
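
To make the analogy concrete, here is a minimal Python sketch of such an ensemble, assuming each sub-LLM returns scored candidate answers (the models, weights and answers below are illustrative placeholders, not part of xLLM or any product):

from collections import defaultdict

def technical_model(prompt):
    # Stand-in for a sub-LLM tuned on technical content
    return {"Check the crawler config table": 0.8, "Run a full vector search": 0.2}

def general_model(prompt):
    # Stand-in for a sub-LLM tuned on general corporate content
    return {"Check the crawler config table": 0.5, "See the FAQ summary": 0.5}

SUB_MODELS = [(technical_model, 0.7), (general_model, 0.3)]  # (model, weight)

def blend(prompt):
    # Weighted vote over all candidate answers, akin to boosting over small trees
    totals = defaultdict(float)
    for model, weight in SUB_MODELS:
        for answer, score in model(prompt).items():
            totals[answer] += weight * score
    return max(totals, key=totals.get)

print(blend("How do I configure the crawler?"))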

Figure 1: xLLM vs classic LLM

Note that speed and accuracy result from using many small, specialized tables (embeddings and so on) as opposed to a big table with long, fixed-size embedding vectors and expensive semantic / vector search. The user selects the categories that best match his prompt. In my case, there is no neural network involved, no GPU needed, yet no latency and no hallucinations. Liability is further reduced with a local implementation, and explainable AI.
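
A toy illustration of the “many small tables” design, with made-up categories, keywords and document IDs rather than real xLLM tables:

# Each top category keeps its own small keyword table; the prompt is matched
# only against the categories the user selected, via exact lookups (no GPU,
# no vector search).
TABLES = {
    "security": {"sso": ["doc_17", "doc_42"], "audit log": ["doc_9"]},
    "billing": {"invoice": ["doc_3"], "refund": ["doc_21"]},
}

def lookup(prompt, categories):
    hits = []
    for cat in categories:  # user-selected categories
        for keyword, docs in TABLES[cat].items():
            if keyword in prompt.lower():
                hits.extend(docs)
    return hits

print(lookup("Where is the audit log stored?", ["security"]))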

Carefully selecting input sources (in many cases, corporate repositories augmented with external data) and smart crawling to reconstruct the hidden structure (underlying taxonomy, breadcrumbs, navigation links, headings, and so on), are critical components of this architecture.
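
As a sketch of what reconstructing that hidden structure can look like during crawling (using BeautifulSoup; the HTML snippet and CSS selectors are assumptions that vary from site to site):

from bs4 import BeautifulSoup

html = """
<nav class="breadcrumb"><a>Docs</a> <a>Security</a> <a>SSO</a></nav>
<h1>Configuring SSO</h1>
<h2>SAML setup</h2>
"""

soup = BeautifulSoup(html, "html.parser")
breadcrumb = [a.get_text() for a in soup.select("nav.breadcrumb a")]  # taxonomy path
headings = [h.get_text() for h in soup.find_all(["h1", "h2"])]        # sub-topics

print(" > ".join(breadcrumb), "|", headings)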

Figure 2: Extra tables in addition to embeddings, specific to one sub-LLM

For details about xLLM (technical implementation, comparing output with OpenAI and the likes on the same prompts, Python code, input sources, and documentation), see here. I also offer a free course on the topic, here.

Author


Vincent Granville is a pioneering GenAI scientist and machine learning expert, co-founder of Data Science Central (acquired by a publicly traded company in 2020), Chief AI Scientist at MLTechniques.com and GenAItechLab.com, former VC-funded executive, author (Elsevier) and patent owner — one related to LLM. Vincent’s past corporate experience includes Visa, Wells Fargo, eBay, NBC, Microsoft, and CNET. Follow Vincent on LinkedIn.

Women in AI: Brandie Nonnecke of UC Berkeley says investors should insist on responsible AI practices


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Brandie Nonnecke is the founding director of the CITRIS Policy Lab, headquartered at UC Berkeley, which supports interdisciplinary research to address questions around the role of regulation in promoting innovation. Nonnecke also co-directs the Berkeley Center for Law and Technology, where she leads projects on AI, platforms and society, and the UC Berkeley AI Policy Hub, an initiative to train researchers to develop effective AI governance and policy frameworks.

In her spare time, Nonnecke hosts a video and podcast series, TecHype, that analyzes emerging tech policies, regulations and laws, providing insights into the benefits and risks and identifying strategies to harness tech for good.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I’ve been working in responsible AI governance for nearly a decade. My training in technology, public policy and their intersection with societal impacts drew me into the field. AI is already pervasive and profoundly impactful in our lives — for better and for worse. It’s important to me to meaningfully contribute to society’s ability to harness this technology for good rather than stand on the sidelines.

What work are you most proud of (in the AI field)?

I’m really proud of two things we’ve accomplished. First, the University of California was the first university to establish responsible AI principles and a governance structure to better ensure responsible procurement and use of AI. We take our commitment to serve the public in a responsible manner seriously. I had the honor of co-chairing the UC Presidential Working Group on AI and its subsequent permanent AI Council. In these roles, I’ve been able to gain firsthand experience thinking through how to best operationalize our responsible AI principles in order to safeguard our faculty, staff, students, and the broader communities we serve. Second, I think it’s critical that the public understand emerging technologies and their real benefits and risks. We launched TecHype, a video and podcast series that demystifies emerging technologies and provides guidance on effective technical and policy interventions.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

Be curious, persistent and undeterred by imposter syndrome. I’ve found it crucial to seek out mentors who support diversity and inclusion, and to offer the same support to others entering the field. Building inclusive communities in tech has been a powerful way to share experiences, advice and encouragement.

What advice would you give to women seeking to enter the AI field?

For women entering the AI field, my advice is threefold: Seek knowledge relentlessly, as AI is a rapidly evolving field. Embrace networking, as connections will open doors to opportunities and offer invaluable support. And advocate for yourself and others, as your voice is essential in shaping an inclusive, equitable future for AI. Remember, your unique perspectives and experiences enrich the field and drive innovation.

What are some of the most pressing issues facing AI as it evolves?

I believe one of the most pressing issues facing AI as it evolves is to not get hung up on the latest hype cycles. We’re seeing this now with generative AI. Sure, generative AI presents significant advancements and will have tremendous impact — good and bad. But other forms of machine learning are in use today that are surreptitiously making decisions that directly affect everyone’s ability to exercise their rights. Rather than focusing on the latest marvels of machine learning, it’s more important that we focus on how and where machine learning is being applied regardless of its technological prowess.

What are some issues AI users should be aware of?

AI users should be aware of issues related to data privacy and security, the potential for bias in AI decision-making and the importance of transparency in how AI systems operate and make decisions. Understanding these issues can empower users to demand more accountable and equitable AI systems.

What is the best way to responsibly build AI?

Responsibly building AI involves integrating ethical considerations at every stage of development and deployment. This includes diverse stakeholder engagement, transparent methodologies, bias management strategies and ongoing impact assessments. Prioritizing the public good and ensuring AI technologies are developed with human rights, fairness and inclusivity at their core are fundamental.

How can investors better push for responsible AI?

This is such an important question! For a long time we never expressly discussed the role of investors. I cannot express enough how impactful investors are! I believe the trope that “regulation stifles innovation” is overused and is often untrue. Instead, I firmly believe smaller firms can experience a late mover advantage and learn from the larger AI companies that have been developing responsible AI practices and the guidance emerging from academia, civil society and government. Investors have the power to shape the industry’s direction by making responsible AI practices a critical factor in their investment decisions. This includes supporting initiatives that focus on addressing social challenges through AI, promoting diversity and inclusion within the AI workforce and advocating for strong governance and technical strategies that help to ensure AI technologies benefit society as a whole.

How Databricks Came to Ola Krutrim’s Rescue

Ola’s Krutrim is brimming with mysterious energy as it prepares to announce a new product tomorrow. It could be a mobile application, the much-anticipated Krutrim Pro, an enterprise line of products, or something even more exciting.

While the source of excitement remains unknown, the credit most certainly goes to Databricks and the talented folks at Krutrim. A few days ago, Krutrim announced its partnership with Databricks to improve its foundational language model, particularly for Indian languages, aiming to enhance AI solutions in India.

“The Krutrim model was launched using our platform,” said Naveen Rao, VP of generative AI at Databricks, during an exclusive interview with AIM ahead of the release of the world’s most powerful open-source model, DBRX.

Further, he said the team custom-trained it on their own mixed Indian language dataset. “We have been working closely with the Databricks team to pre-train and fine-tune our foundational LLM,” said Ravi Jain, Krutrim VP.

Is Ola Krutrim Built on Existing Fine-tuned Models?

Let’s clear the air once and for all. Ola Krutrim has been quite obsessed with developing its own foundational model from scratch, despite rumours that it is being built on fine-tuned models such as Llama-2, Mistral, Claude-3 or even the most recent, DBRX.

Source: X

While that still remains a mystery, Rao said, “The Krutrim model was built on the dense model. It was not an MoE (mixture of experts); it was built off their data with our tooling and our model definition,” hinting that they are likely to use DBRX soon.

“This is actually a shortcut way as well,” said Rao. He said that tools and components (chips) are often too expensive due to currency differences. DBRX offers a solid base, allowing developers to build something capable with little data and cost. “It’s a great way to get going,” said Rao, when asked how it could benefit the Indian developer ecosystem.

Some of the early adopters of DBRX include Perplexity AI, You.com, Accenture and NVIDIA among others.

Foundational Models Struggle

Rao thinks that the vast majority of foundational model companies will fail. He says you cannot beat OpenAI without a differentiated use case.

He told AIM that OpenAI’s investments were ahead of demand and investing heavily without clear market demand is risky. “OpenAI might struggle with justifying investments without a demand. Companies copying them face even bigger challenges,” he added, citing Anthropic, which found a moat in building custom enterprise models.

“You’ve got to do something better than they do. And, if you don’t, and it’s cheap enough to move, then why would you use somebody else’s model? So it doesn’t make sense to me just to try to be ahead unless you can beat them,” he added.

Further, he said you must beat them in some other dimension if you cannot beat them at the consumer use case. “We believe value doesn’t reside in the model itself. That’s ephemeral. It’s the process of building and customising models,” added Rao.

Rao said that everyone has to have their take, but a vast majority of them just build models and call it a victory. “Woohoo! You built a model. Great,” he quipped. But he said that it will not work without differentiation or problem-solving.

“Just building a piece of technology because you said you can do it doesn’t really prove that you can solve a problem,” said Rao.

What about Krutrim?

Rao seems optimistic about Krutrim despite its recent mishaps and criticism. “This issue wasn’t exactly wrong, but rather incomplete,” said Rao.

He said that there is substantial data engineering work. “We provide the tools, but ultimately, it is how our clients use them. Post-training, a lot of effort goes into ensuring the responses are accurate,” said Rao.

He said that Krutrim took a risk to be one of the first with a native model, but more work is needed for refinement, such as supervised fine-tuning.

Solving AI for India is a complex problem. “A single setback doesn’t negate the entire effort,” said Rao. He said India is unique, with Hindi and English as overlay languages, alongside local languages; understanding and interacting in this rich linguistic environment is a significant challenge.

He believes that Krutrim is positioned better to address these challenges than any other player in the market. That also explains why the company won the GenAI Innovation Award for ‘using generative AI to transform their products, processes, and tools’ at the Data Intelligence Day held in Bengaluru.

On the sidelines of the event, it was also revealed that both companies plan to create AI-powered products such as conversational assistants, content generation tools, and customised offerings across industries.

The post How Databricks Came to Ola Krutrim’s Rescue appeared first on Analytics India Magazine.

What to Expect at the ‘Absolutely Incredible’ Apple WWDC 2024


Apple announced the date for the year’s most anticipated event – the Worldwide Developers Conference (WWDC). The Cupertino giant will host the event online from June 10 through June 14, giving developers and students the opportunity to attend the opening-day event in person at Apple Park. All eyes are on how the company will finally embrace AI at the upcoming event.

Embracing ‘AI’ not ‘ML’

“Mark your calendars for #WWDC24, June 10-14. It’s going to be Absolutely Incredible!” posted Apple SVP of marketing Greg Joswiak, triggering a surge of comments from users quick to catch the subtle messaging.

Source: X

If the deduction is true, this is Apple’s way of embracing the word ‘AI’ at the developers’ conference, a stark difference from last year’s event, where Tim Cook made sure AI was never mentioned and ‘machine learning’ was used instead.

Generative AI Features for Apple Devices

In the last earnings call, CEO Tim Cook said that the company will continue to ‘spend a tremendous amount of time and effort’ on AI and technologies that will shape the future. “We’re excited to share the details of our ongoing work in that space later this year,” he said, possibly hinting at the upcoming WWDC.

With Apple’s recent push to bring generative AI capabilities to its devices, it is evident that these features will be unveiled at WWDC. With the release of the MM1 paper, introducing a new family of multimodal AI models whose largest member has 30B parameters, the focus is on how this work would fit into the WWDC announcements.

Jason Snell, writer and former editor-in-chief at Macworld, highlighted in a recent podcast episode of MacBreak Weekly that Apple might not have its LLM ready for WWDC, but it will demonstrate AI capabilities utilising third-party models in a privacy-focused manner.

Interestingly, partnerships with competitor big tech companies are already in place.

Apple was recently in talks with Google to integrate its most powerful AI model, Gemini, into the iPhone. Apple is also partnering with Chinese tech giant Baidu to bring further GenAI features.

iOS Revamp for Cooler Features

The generative AI features are reported to be integrated into the upcoming iOS 18 software, which is predicted to be a major update of the operating system. Apple had earlier described its upcoming OS as ‘ambitious and compelling’, with new features, designs and improved performance and security.

– iOS 18 is expected to bring various new features to Apple’s in-built apps. Apple Music is set to have auto-generated playlists for users based on mood and interactions.

– Apps such as Pages and Keynotes will have AI-assisted writing features.

– Xcode, Apple’s code development platform, is also likely to get AI features to assist with coding.

– Apple Maps will likely see more features, such as support for topographic maps and an option to save custom routes.

– The new operating software might bring a change in the design and layout of the phone, such as customisable home screens.

WWDC24 is set to unveil these new features in the latest iOS, iPadOS, macOS, watchOS, and tvOS. It is also suggested that iOS 18 will be redesigned to match visionOS.

Advanced Apple Vision Pro

Apple’s legendary announcement at last year’s WWDC was hands-down their spatial computing device Apple Vision Pro.

As confirmed in the WWDC blog, the revolutionary mixed-reality headset, whose in-built capabilities have been likened to those of an autonomous vehicle, is expected to receive upgrades. The problems reported with the maiden version of the headset will likely be addressed.

Apple Pencil is also being tested out as an accessory for the next version of visionOS. The pencil is currently compatible with iPads only.

Currently sold only in the US, the Apple Vision Pro will likely be sold in other countries too, including China.

AI Chip Integration

Earlier this month, Apple announced the integration of the M3 chip, Apple’s most powerful AI chip, into its latest 13- and 15-inch MacBook Air. However, the chip, unveiled in October last year, is not yet available on Mac Studio and Mac Mini devices. WWDC will likely see announcements around M3 chip integration on these products.

Further, the Apple A18, the forthcoming processor expected to power the iPhone 16 lineup, is said to be equipped with a powerful neural engine and is reported to have a 6-core GPU, allowing for more powerful and efficient AI performance.

New and Improved Siri

Last year, it was reported that Apple’s voice assistant Siri would receive major upgrades this year. AI-enabled features will be integrated into Siri to improve conversation capabilities and user personalisation. Siri and other Apple apps, such as Messages, will be seamlessly connected for better response options.

The post What to Expect at the ‘Absolutely Incredible’ Apple WWDC 2024 appeared first on Analytics India Magazine.

Women in AI: Kate Devlin of King’s College is researching AI and intimacy


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Kate Devlin is a lecturer in AI and society at King’s College London. The author of “Turned On: Science, Sex and Robots,” which examines the ethical and social implications of tech and intimacy, Devlin’s research investigates how people interact with and react to technologies — both past and future.

Devlin — who in 2016 ran the U.K.’s first sex tech hackathon — directs advocacy and engagement for the Trusted Autonomous Systems Hub, a collaborative platform to support the development of “socially beneficial” robotics and AI systems. She’s also a board member of the Open Rights Group, an organization that works to preserve digital rights and freedoms.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I started off as an archaeologist, eventually moving across disciplines and completing a Ph.D. in computer science in 2004. The idea was to integrate the subjects, but I ended up doing more and more on human-computer interaction, and on how people interact with AI and robots, including the reception that such technologies have.

What work are you most proud of (in the AI field)?

I’m pleased that intimacy and AI is now taken seriously as an academic area of study. There’s some amazing research going on. It used to be viewed as very niche and highly unlikely; now we’re seeing people forming meaningful relationships with chatbots — meaningful in that they really do mean something to those people.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

I don’t. We just persevere. It’s still shockingly sexist. And maybe I don’t want to “lean in”; maybe I want an environment that isn’t defined around macho qualities. I guess it’s a two-pronged thing: we need more women in visible, top positions, and we need to tackle sexism in schools and beyond. And then we need a systemic change to stop the “leaky pipeline” — we’re seeing an increase of women in AI and tech due to a rise in home working as it fits better with childcare which, let’s face it, still falls to us. Let’s have more flexibility until we don’t have to do the majority of that caring on our own.

What advice would you give to women seeking to enter the AI field?

You have the right to take up as much space as the men.

What are some of the most pressing issues facing AI as it evolves?

Responsibility. Accountability. There’s currently a fever pitch that hinges around technological determinism — as if we’re hurtling toward some dangerous future. We don’t have to be. It’s possible to reject that. It’s fine to prioritize a different path. Very few of the issues we face are new; it’s size and scale that are making this particularly tricky.

What are some issues AI users should be aware of?

Uh… late-stage capitalism.

More usefully: check provenance — where’s the data coming from? How ethical is the provider? Do they have a good track record of social responsibility? Would you let them control your oxygen supply on Mars?

What is the best way to responsibly build AI?

Regulation and conscience.

How can investors better push for responsible AI?

Thinking of this in purely business terms, you’ll have much happier customers if you care about people. We can see through ethics-washing so really make it matter. Hold the companies responsible for considering things like human rights, labor, sustainability and social impact in their AI supply chain.

This Week in AI: Let us not forget the humble data annotator


Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

This week in AI, I’d like to turn the spotlight on labeling and annotation startups — startups like Scale AI, which is reportedly in talks to raise new funds at a $13 billion valuation. Labeling and annotation platforms might not get the attention flashy new generative AI models like OpenAI’s Sora do. But they’re essential. Without them, modern AI models arguably wouldn’t exist.

The data on which many models train has to be labeled. Why? Labels, or tags, help the models understand and interpret data during the training process. For example, labels to train an image recognition model might take the form of markings around objects, “bounding boxes” or captions referring to each person, place or object depicted in an image.
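
Concretely, a single label for an image-recognition task often looks like the record below, loosely following the COCO convention of storing a bounding box as [x, y, width, height] in pixels (the values here are made up for illustration):

annotation = {
    "image_id": 184613,                  # which image this label belongs to
    "category": "person",                # what the marked object is
    "bbox": [473.0, 395.0, 38.6, 28.2],  # the bounding box around the object
    "caption": "a person crossing the street",
}
print(annotation["category"], annotation["bbox"])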

The accuracy and quality of labels significantly impact the performance — and reliability — of the trained models. And annotation is a vast undertaking, requiring thousands to millions of labels for the larger and more sophisticated data sets in use.

So you’d think data annotators would be treated well, paid living wages and given the same benefits that the engineers building the models themselves enjoy. But often, the opposite is true — a product of the brutal working conditions that many annotation and labeling startups foster.

Companies with billions in the bank, like OpenAI, have relied on annotators in third-world countries paid only a few dollars per hour. Some of these annotators are exposed to highly disturbing content, like graphic imagery, yet aren’t given time off (as they’re usually contractors) or access to mental health resources.


An excellent piece in NY Mag peels back the curtains on Scale AI in particular, which recruits annotators in places as far-flung as Nairobi, Kenya. Some of the tasks on Scale AI take labelers multiple eight-hour workdays — no breaks — and pay as little as $10. And these workers are beholden to the whims of the platform. Annotators sometimes go long stretches without receiving work, or they’re unceremoniously booted off Scale AI — as happened to contractors in Thailand, Vietnam, Poland and Pakistan recently.

Some annotation and labeling platforms claim to provide “fair-trade” work. They’ve made it a central part of their branding in fact. But as MIT Tech Review’s Kate Kaye notes, there are no regulations, only weak industry standards for what ethical labeling work means — and companies’ own definitions vary widely.

So, what to do? Barring a massive technological breakthrough, the need to annotate and label data for AI training isn’t going away. We can hope that the platforms self-regulate, but the more realistic solution seems to be policymaking. That itself is a tricky prospect — but it’s the best shot we have, I’d argue, at changing things for the better. Or at least starting to.

Here are some other AI stories of note from the past few days:

    • OpenAI builds a voice cloner: OpenAI is previewing a new AI-powered tool it developed, Voice Engine, that enables users to clone a voice from a 15-second recording of someone speaking. But the company is choosing not to release it widely (yet), citing risks of misuse and abuse.
    • Amazon doubles down on Anthropic: Amazon has invested a further $2.75 billion in growing AI power Anthropic, following through on the option it left open last September.
    • Google.org launches an accelerator: Google.org, Google’s charitable wing, is launching a new $20 million, six-month program to help fund nonprofits developing tech that leverages generative AI.
    • A new model architecture: AI startup AI21 Labs has released a generative AI model, Jamba, that employs a novel, new(ish) model architecture — state space models, or SSMs — to improve efficiency.
    • Databricks launches DBRX: In other model news, Databricks this week released DBRX, a generative AI model akin to OpenAI’s GPT series and Google’s Gemini. The company claims it achieves state-of-the-art results on a number of popular AI benchmarks, including several measuring reasoning.
    • Uber Eats and UK AI regulation: Natasha writes about how an Uber Eats courier’s fight against AI bias shows that justice under the UK’s AI regulations is hard won.
    • EU election security guidance: The European Union published draft election security guidelines Tuesday aimed at the around two dozen platforms regulated under the Digital Services Act, including guidelines pertaining to preventing content recommendation algorithms from spreading generative AI-based disinformation (aka political deepfakes).
    • Grok gets upgraded: X’s Grok chatbot will soon get an upgraded underlying model, Grok-1.5 — at the same time all Premium subscribers on X will gain access to Grok. (Grok was previously exclusive to X Premium+ customers.)
    • Adobe expands Firefly: This week, Adobe unveiled Firefly Services, a set of more than 20 new generative and creative APIs, tools and services. It also launched Custom Models, which allows businesses to fine-tune Firefly models based on their assets — a part of Adobe’s new GenStudio suite.

More machine learnings

How’s the weather? AI is increasingly able to tell you this. I noted a few efforts in hourly, weekly, and century-scale forecasting a few months ago, but like all things AI, the field is moving fast. The teams behind MetNet-3 and GraphCast have published a paper describing a new system called SEEDS, for Scalable Ensemble Envelope Diffusion Sampler.

Animation showing how more predictions create a more even distribution of weather outcomes.

SEEDS uses diffusion to generate “ensembles” of plausible weather outcomes for an area based on the input (radar readings or orbital imagery perhaps) much faster than physics-based models. With bigger ensemble counts, they can cover more edge cases (like an event that only occurs in 1 out of 100 possible scenarios) and be more confident about more likely situations.
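
As a toy simulation (synthetic numbers, not SEEDS’ actual method), the chance that at least one ensemble member captures a 1-in-100 outcome grows quickly with the member count:

import random

def hit_rate(n_members, p=0.01, trials=2000):
    # Fraction of trials in which at least one of n_members samples the rare event
    return sum(
        any(random.random() < p for _ in range(n_members)) for _ in range(trials)
    ) / trials

for n in (10, 100, 1000):
    print(n, hit_rate(n))  # roughly 0.10, 0.63, 1.00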

Fujitsu is also hoping to better understand the natural world by applying AI image handling techniques to underwater imagery and lidar data collected by underwater autonomous vehicles. Improving the quality of the imagery will let other, less sophisticated processes (like 3D conversion) work better on the target data.

Image Credits: Fujitsu

The idea is to build a “digital twin” of waters that can help simulate and predict new developments. We’re a long way off from that, but you gotta start somewhere.

Over among the LLMs, researchers have found that they mimic intelligence by an even simpler method than expected: linear functions. Frankly, the math is beyond me (vector stuff in many dimensions), but this writeup at MIT makes it pretty clear that the recall mechanism of these models is pretty… basic.

“Even though these models are really complicated, nonlinear functions that are trained on lots of data and are very hard to understand, there are sometimes really simple mechanisms working inside them. This is one instance of that,” said co-lead author Evan Hernandez. If you’re more technically minded, check out the paper here.
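
Here is a toy numpy illustration of the finding, using synthetic vectors rather than a real model’s representations: if attribute embeddings really are a linear function of subject embeddings, a single matrix recovered by least squares reproduces the mapping:

import numpy as np

rng = np.random.default_rng(0)
d = 64
subjects = rng.normal(size=(100, d))           # stand-in subject embeddings
W_true = rng.normal(size=(d, d)) / np.sqrt(d)  # the hidden linear "relation"
attributes = subjects @ W_true.T               # e.g. "capital of" applied to each

# Recover the relation with least squares, as a linear probe would
W_hat, *_ = np.linalg.lstsq(subjects, attributes, rcond=None)
print(np.allclose(W_hat.T, W_true, atol=1e-6))  # True: one matrix explains recall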

One way these models can fail is not understanding context or feedback. Even a really capable LLM might not “get it” if you tell it your name is pronounced a certain way, since these models don’t actually know or understand anything. In cases where that might be important, like human-robot interactions, it could put people off if the robot acts that way.

Disney Research has been looking into automated character interactions for a long time, and this name pronunciation and reuse paper just showed up a little while back. It seems obvious, but extracting the phonemes when someone introduces themselves and encoding that rather than just the written name is a smart approach.

Image Credits: Disney Research

Lastly, as AI and search overlap more and more, it’s worth reassessing how these tools are used and whether there are any new risks presented by this unholy union. Safiya Umoja Noble has been an important voice in AI and search ethics for years, and her opinion is always enlightening. She did a nice interview with the UCLA news team about how her work has evolved and why we need to stay frosty when it comes to bias and bad habits in search.


Zoho’s ManageEngine Invests $10 Mn in NVIDIA, Intel, and AMD GPUs

ManageEngine Zoho

ManageEngine, the enterprise IT management division of Zoho Corporation, has been accelerating its infrastructure development exponentially to unleash generative AI offerings for its customers.

The company recently invested nearly $10 million in procuring GPUs from all three majors. “We are working closely with Intel, NVIDIA, and AMD,” said Shailesh Kumar Davey, co-founder and VP of engineering of ManageEngine, in an exclusive interview with AIM.

“Considering NVIDIA’s dominance in the GPU market, we’ve allocated nearly $10 million for infrastructure investments, primarily in NVIDIA products, though we’ve also included Intel and AMD,” shared Davey.

Zoho chief Sridhar Vembu expressed a similar sentiment on the sidelines of Zoholics in October last year when he discussed the short supply of NVIDIA GPUs and the six-month wait period.

The wait is almost over. Recently, Yotta became India’s first company to receive the NVIDIA H100 GPUs, with Blackwell in the pipeline.

Hopefully, Zoho’s ManageEngine will also get its NVIDIA GPUs really soon.

Bets Big on Small and Medium Language Models

“Whenever we discuss transformer models, we often think of GPT as a prime example. This leads to the assumption that a large-scale model like GPT is always necessary. However, that’s not the case,” said Davey, advocating for smaller, more cost-effective models tailored to specific needs.

Further, he said that ManageEngine is looking at deploying smaller language models (SLMs) and medium language models (MLMs)—the likes of Llama 2, Mistral, Claude 3, and now DBRX. “We will deploy them in the cloud, and some of our tools are already adopted,” he added.

He said that for large language models, the company has already partnered with companies such as OpenAI and Anthropic. In the coming months, it is also planning to build its own SLMs and MLMs.

Going beyond the LLM hype, ManageEngine has been one of the early adopters of AI/ML. “Since 2011, we have incorporated AI/ML across our solutions, including IT operations, IT service management, IT security, and endpoint management. We offer nearly 50 different solutions to meet the needs of IT and enterprise IT teams,” he added.

Eyes India for Growth

For nearly two decades, ManageEngine has been a significant player in the tech industry. Initially, the United States was its largest market, but India has been rapidly gaining ground.

The surge in ManageEngine’s growth in India can be attributed to increased investment in IT infrastructure by sectors such as Banking, Financial Services and Insurance (BFSI), retail, manufacturing, and government. These sectors are the company’s primary markets in India.

Davey said that the company’s focus on IT infrastructure in these areas has resulted in significant growth in India. He pointed out that just six years ago, the country ranked as the seventh or eighth largest market for ManageEngine.

Now, it is the company’s number two market for growth.

“In the past four to five years, there has been a dramatic change. India is now our number two market, growing at 40%. In that, the cloud-based solutions are growing at 70%,” shared Davey.

With UPI-like models mushrooming in Dubai and Saudi Arabia, he believes that these countries will also digitise rapidly.

Davey said that there is a clear dichotomy in market growth: while developed countries are experiencing slower growth, emerging markets are rapidly expanding and leading the way in technological advancements.

Towards Data Sovereignty

At present, the company provides both on-prem and cloud services and has two data centres in India—one in Chennai and another in Mumbai. “Data sovereignty is taken care of, and all our solutions are hosted in these two, catering to the needs of the Indian market,” said Davey.

ManageEngine has 18 data centres and close to 100 POPs (points of presence), which are small one- or two-rack setups, worldwide. It recently opened data centres in Canada and Saudi Arabia, and will soon open one in the UAE.

Data sovereignty is in its DNA. Nearly two decades ago, the company began developing its own hosting methods. “One decision we made in 2005 was to avoid using hyperscalers like AWS or Azure or GCP for hosting. We thought this could become a key competitive advantage,” said Davey, adding that people are now recognising more limitations in cloud services, leading to a shift away from them.

Very early on, the company realised the importance of having engineering control over the entire stack—from the data centres it operates to the solutions deployed and how they are optimised.

“We believe that optimising everything from the software to the server and the data centre can provide significant value to our customers,” said Davey.

ManageEngine is surely the “bread and butter” of Zoho, as it is set to cross $1 billion in revenue in a couple of years.

The post Zoho’s ManageEngine Invests $10 Mn in NVIDIA, Intel, and AMD GPUs appeared first on Analytics India Magazine.