Smartphones will be obsolete in 10 years, says Meta’s AI Chief  

Yann LeCun, Chief AI Scientist at Meta, said that in the next 10-15 years we won’t have smartphones; instead, we will use augmented reality glasses and bracelets to interact with intelligent assistants.

“The last thing we might want is intelligent virtual assistants that help us in our daily lives. So today all of us here are carrying a smartphone in our pockets, 10 years from now or 15 years from now we’re not going to have smartphones anymore, we’re going to have augmented reality glasses”, said LeCun.

In 2022, Nokia CEO Pekka Lundmark predicted that smartphones may not stay relevant in 2030. During the World Economic Forum, Lundmark said, “By then, definitely the smartphone as we know it today will no longer be the most common interface. Many of these things will be built directly into our bodies.”

As technology evolves, innovations like the Limitless AI Pendant and the WIZPR Ring are being launched to harness the power of AI and let users interact with large language models.

While Humane spent the last year making the case that its Ai Pin would replace smartphones, allowing people to interact more with the real world, the device received a mixed reception.

But by striking a balance between wearables, AI, and data privacy, the full potential of these technologies can be unlocked while prioritising responsible and ethical handling of personal data.

Next in line is Elon Musk’s Neuralink, which is working on electronic devices that can be implanted into the brain and used to communicate with machines and other individuals. The company hopes to make the implant as common as a smartphone, opening up possibilities for both medical and technological advancement.

The post Smartphones will be obsolete in 10 years, says Meta’s AI Chief appeared first on Analytics India Magazine.

AI to Replace Annoying Colleagues Who Complain About Your Tea Breaks to the Boss

Recently, a video showcased how Chinese workplaces are deploying AI-powered cameras to track employees’ activities and take note of their productivity and break times.

This could soon be a reality at your workplace too, eliminating that annoying colleague who snitches about your coffee or tea breaks.

Research conducted by Top10VPN, a virtual private network comparison site, indicated a 54% increase in the demand for employee surveillance software since 2019. This surge indicates employers’ quiet dependence on AI monitoring tools to gain insights into employee performance and business operations.

Amid the COVID-19 pandemic, as employees returned to work under advised pandemic protocols, machine learning emerged as a tool to monitor adherence to social distancing measures at the workplace.

Startups like Landing AI developed workplace monitoring tools that issue alerts when individuals fail to maintain the recommended distance from their colleagues.

Along similar lines, Bharat Aluminium Company Limited (BALCO), an Indian aluminium producer and a subsidiary of Vedanta Aluminium, introduced an innovative solution called the T-Pulse Health, Safety, Security and Environment (HSSE) Monitoring System, which leverages AI-based technology to enhance workplace safety.

Software like We360.ai, an employee monitoring system, claims to enhance employee productivity. The company lists Vakilsearch, ACL Digital, Patanjali, Cogent Infotech, and The Knowledge Academy among the clients using its AI tracker.

The Dark Side of AI Surveillance

From algorithms autonomously screening job applications without human intervention to monitoring software meticulously tracking coffee breaks, AI appears to be an unsettling presence in the workplace.

A primary concern revolves around the collection and use of employee data for surveillance. A CNBC article indicates that companies like Walmart, Delta Air Lines, T-Mobile, Chevron, and Starbucks have engaged the AI startup Aware to monitor employee communications.

European brands like Nestle and AstraZeneca have similarly employed Aware’s services; the firm uses dozens of models built to read text and process messages.

Jeff Schumann, co-founder and CEO of the Columbus, Ohio-based startup, explained to CNBC that Aware’s analytics tool, crafted to track employee sentiment and toxicity, emphasises privacy by refraining from flagging individual employee names.

However, in instances of severe threats or predetermined risk behaviours, the company’s separate eDiscovery tool can reveal the employees’ names.

As early as 2019, a study by the Pew Research Center found that 64% of Americans were concerned about the impact of AI and automation on job security and privacy.

Furthermore, the use of AI in recruitment and talent management processes has sparked debates about fairness, bias, and discrimination. AI algorithms trained on historical data may perpetuate existing biases and prejudices, leading to discriminatory outcomes in hiring and promotion decisions.

Without proper oversight and safeguards, AI-driven systems could exacerbate inequalities and undermine diversity efforts in the workplace.

Workplace Surveillance, A New Normal?

In India, industrial accidents claim thousands of lives annually. According to government data, an average of three workers die each day due to insufficient safety measures in factories.

Recognising this urgent need for change, integrating AI surveillance emerges as a promising solution to mitigate workplace incidents and injuries. By providing immediate alerts to authorities, managers can swiftly respond in real time, potentially averting disasters.

In this era of innovation, we must embrace the potential of AI technology to enhance safety standards. Let’s envision a future where AI systems act as vigilant guardians, alerting managers when employees require a break from excessive screen time or exhibit signs of stress from prolonged work hours.

This approach promises to boost organisational efficiency while simultaneously prioritising employee well-being.

Furthermore, addressing the issue of suicides, particularly affecting the working-age population, AI monitoring at the workplace could offer valuable insights. By detecting any inappropriate or distressing behaviours among colleagues, managers can intervene promptly, potentially preventing further tragedies.

The post AI to Replace Annoying Colleagues Who Complain About Your Tea Breaks to the Boss appeared first on Analytics India Magazine.

Why Moderna Partnered with OpenAI 

“People literally talk about how AI is going to cure diseases someday, and I think this is a very meaningful first step,” said CEO Sam Altman about OpenAI’s partnership with Moderna. The hottest AI startup has recently partnered with the pharmaceutical and biotechnology company to develop mRNA medicines.

We’re excited to announce our ongoing collaboration with @OpenAI to co-innovate with a shared vision of AI’s immense potential in the future of business and healthcare. #AI and collaborations like this will be a key component to our ability to scale and maximize our impact on… pic.twitter.com/MOylzcErDc

— Moderna (@moderna_tx) April 24, 2024

As part of the deal, approximately 3,000 Moderna employees will gain access to ChatGPT Enterprise, developed on OpenAI’s GPT-4. Moderna plans to use ChatGPT Enterprise for mRNA medicine development to launch up to 15 new products in the next five years, including a vaccine for RSV and personalised cancer treatments.

Interestingly, Moderna was one of the first customers of OpenAI’s ChatGPT Enterprise when it launched last year. Prior to that, in 2023, Moderna launched mChat, its own instance of ChatGPT built on top of OpenAI’s API.

To date, employees at Moderna have created over 750 GPTs designed to support specific tasks or processes across the business. Some of these GPTs assist in selecting optimal doses for clinical trials and drafting responses to regulatory questions.

Moderna x ChatGPT Enterprise

One solution Moderna is actively developing and validating with ChatGPT Enterprise is a pilot program called Dose ID. It can review and analyse clinical data, integrating and visualising large datasets. Dose ID is designed to assist the clinical study team in data analysis, enhancing their clinical judgment and decision-making.

Apart from Dose ID, there is another GPT called Contract Companion GPT, which helps the legal team at Moderna get a clear, readable summary of a contract. The Policy Bot GPT helps employees get quick answers about internal policies without needing to search through hundreds of documents.

Moderna’s corporate brand team has also found many ways to take advantage of ChatGPT Enterprise. They have a GPT that helps prepare slides for quarterly earnings calls, and another that helps convert biotech terminology into approachable language for investor communications.

Moderna is Not Alone

Last year, during AWS re:Invent, Pfizer announced that it is using generative AI for drug discovery. Pfizer developed a new generative AI platform, Charlie, named after the pharmaceutical giant’s founder.

“We are leveraging generative AI, which is estimated to deliver annual cost savings of $750 million to $1 billion in the near term—a real tangible value,” said Lidia Fonseca, chief digital and technology officer at Pfizer.

She added that using AWS cloud services, Pfizer rapidly deployed Vox, an internal generative AI platform, enabling colleagues to access LLMs available in Amazon Bedrock and SageMaker.

“A variety of LLMs in Bedrock means we can select the best tools for use cases in R&D, manufacturing, marketing, and more, enabling Pfizer and AWS to prototype 17 different use cases in a matter of weeks.”

“AI and generative AI will help us identify new oncology targets, a process that is largely manual today. With AI, we can search and collate relevant data and scientific content from many more sources in a fraction of the time,” she added.

Meanwhile, Novartis has partnered with Isomorphic Labs. “We announced a partnership with Isomorphic Labs, which is a spin-out of DeepMind from Google, to also see how we can speed up our ability to drug new potential targets for new medicines,” said Novartis CEO Vasant Narasimhan.

“AI is going to impact many of our productivity efforts in drug development, like how fast can we generate new trial protocols, how fast can we work with regulators, how fast can we look at patient safety, and look at large patient data sets,” he added.

Similarly, AstraZeneca partnered with the US AI biologics firm Absci to design an antibody to fight cancer.

Last year, Google introduced Med-PaLM 2, an LLM fine-tuned for healthcare. Recently, the tech giant introduced MedLM for chest X-ray, which simplifies radiology workflows by assisting with the classification of chest X-rays for a variety of use cases.

With pharmaceutical companies partnering with generative AI startups and firms, the future of medicines and drug discovery is set to become more efficient and cost-effective.

The post Why Moderna Partnered with OpenAI appeared first on Analytics India Magazine.

Elon Musk Ditches India for China

Tesla CEO Elon Musk made a surprise visit to China on Sunday, meeting with government officials to discuss the rollout of Tesla’s Full Self-Driving (FSD) software in the country. The trip comes just a week after Musk abruptly postponed a planned visit to India for high-level investment talks.

During his meeting with Musk, Chinese Premier Li Qiang praised Tesla’s growth in China, calling it “a successful example of Sino-U.S. economic and trade cooperation,” according to Chinese state media. Musk is seeking regulatory approval to introduce FSD, which is not yet capable of fully autonomous driving, in key markets like China.

A major hurdle for Tesla has been obtaining permission to transfer FSD data collected in China to the United States for training its algorithms. However, the China Association of Automobile Manufacturers announced that Tesla has cleared Beijing’s data security tests, and local authorities have lifted restrictions on Tesla vehicles in certain areas.

Musk’s visit coincides with the Beijing Auto Show, where rival automakers are showcasing new models with advanced driver-assistance features. Tesla, which has no new EVs to display, is not participating in the event. The company has been facing intense competition and has resorted to aggressive pricing, including recent price cuts earlier this month.

Last week, Musk abruptly cancelled a scheduled visit to New Delhi, citing “very heavy Tesla obligations.” He was expected to meet with Prime Minister Narendra Modi and announce a significant investment in India, potentially including a Tesla factory. An India factory would allow Tesla to enter the huge potential market while bypassing heavy import duties, but a low-cost Tesla EV would be crucial to tap into the country’s market, which has limited EV charging infrastructure.

Tesla’s stock surged 14.4% last week to $168.29, rebounding from a 15-month low despite the company’s Q1 earnings per share missing lowered expectations. Investors were encouraged by Tesla’s plans for more affordable EVs and Musk’s optimistic predictions for higher deliveries in 2024.

The post Elon Musk Ditches India for China appeared first on Analytics India Magazine.

Yotta Partners with BLC to Build Nepal’s First Supercloud Data Centre

India’s Yotta Data Services and Nepal’s BLC Holding have partnered to build Nepal’s first supercloud data centre facility called “K1” in Ramkot near the capital Kathmandu.

The multi-million-dollar K1 facility will offer up to 4MW of critical IT load capacity across a 3-acre, 60,000 sq ft site. Located within 20 km of Kathmandu airport, it will provide cloud, managed IT and cybersecurity services to store and process data, AI models and enterprise applications.

The joint venture leverages BLC’s understanding of the local landscape, regulations and customer relationships, combined with Yotta’s global expertise, technology standards and access to wider markets, including hyperscalers. “Yotta’s expertise in managing data centres at the utmost standards and its dedication to constructing a hyperscale platform grounded in accountable, reliable, and reproducible methodologies resonates harmoniously with BLC’s proven track record in local operations,” said Sunil Gupta, Co-Founder & CEO, Yotta Data Services.

In addition to the core data centre services, K1 will offer Yotta’s cloud platforms, such as Shakti Cloud for AI/HPC and the Yntraa hyperscale cloud, along with managed IT and cybersecurity solutions. Built to Tier III standards, K1 will have a modular design for high uptime, reliability, scalability and flexibility. It will meet ISO 14001 and ISO 50001 standards for environmental and energy management, and ISO 27001 and PCI-DSS for security, with dual high-voltage power substations and carrier-neutral dense connectivity.

The facility aims to enable AI/ML development, generate jobs from construction to high-tech operations, and address data sovereignty concerns by enabling local control and participation. “The proposed K1 facility will not just be a data centre; we’re creating an ICT ecosystem that drives local and global growth,” said Megha Chaudhary, Managing Director, BLC. “By partnering with Yotta, we are strategically positioned to meet the volume and scale requirements while simultaneously delivering the premium, super-high availability needs of hyperscalers, enterprises, and the government.”

This marks Yotta’s expansion into Nepal after its hyperscale campuses in Navi Mumbai and Greater Noida, India, as well as an upcoming project in Bangladesh’s Dhaka Hyperscale Data Center Park. The K1 data centre is expected to be completed within the next 24 months.

The post Yotta Partners with BLC to Build Nepal’s First Supercloud Data Centre appeared first on Analytics India Magazine.

How RPA vendors aim to remain relevant in a world of AI agents

By Kyle Wiggers

What’s the next big thing in enterprise automation? If you ask the tech giants, it’s agents — driven by generative AI.

There’s no universally accepted definition of agent, but these days the term is used to describe generative AI-powered tools that can perform complex tasks through human-like interactions across software and web platforms.

For example, an agent could create an itinerary by filling in a customer’s info on airlines’ and hotel chains’ websites. Or an agent could order the least expensive ride-hailing service to a location by automatically comparing prices across apps.

Vendors sense opportunity. ChatGPT maker OpenAI is reportedly deep into developing AI agent systems. And Google demoed a slew of agent-like products at its annual Cloud Next conference in early April.

“Companies should start preparing for wide-scale adoption of autonomous agents today,” analysts at Boston Consulting Group wrote recently in a report — citing experts who estimate that autonomous agents will go mainstream in three to five years.

Old-school automation

So where does that leave RPA?

Robotic process automation (RPA) came into vogue over a decade ago as enterprises turned to the tech to bolster their digital transformation efforts while reducing costs. Like an agent, RPA drives workflow automation. But it’s a much more rigid form, based on “if-then” preset rules for processes that can be broken down into strictly defined, discretized steps.

“RPA can mimic human actions, such as clicking, typing or copying and pasting, to perform tasks faster and more accurately than humans,” Saikat Ray, VP analyst at Gartner, explained to TechCrunch in an interview. “However, RPA bots have limitations when it comes to handling complex, creative or dynamic tasks that require natural language processing or reasoning skills.”

This rigidity makes RPA expensive to build — and considerably limits its applicability.

A 2022 survey from Robocorp, an RPA vendor, found that among organizations that say they’ve adopted RPA, 69% experience broken automation workflows at least once a week, many of which take hours to fix. Entire businesses have been built around helping enterprises manage their RPA installations and prevent them from breaking.
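To make that brittleness concrete, here is a minimal sketch of the "if-then" rule pattern RPA relies on. The field names and rule table are hypothetical, not any vendor's API; the point is that every step is keyed to an exact, pre-defined field, so any change upstream breaks the workflow.

```python
# Minimal sketch of rule-based RPA: each step is a hard-coded
# "if-then" rule keyed to exact field names, strictly defined in
# advance. Field names here are hypothetical illustrations.

INVOICE_RULES = [
    # (source field, destination field) pairs
    ("vendor_name", "payee"),
    ("total_due", "amount"),
    ("due_date", "payment_date"),
]

def run_bot(source_form: dict, dest_form: dict) -> dict:
    for src, dst in INVOICE_RULES:
        if src not in source_form:
            # A renamed or moved field breaks the entire workflow,
            # which is exactly the failure mode surveys report.
            raise KeyError(f"workflow broken: missing field {src!r}")
        dest_form[dst] = source_form[src]
    return dest_form
```

An agent, by contrast, would be expected to infer where the data should go even after the form changes, which is the capability gap the vendors are trying to close with generative AI.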

RPA vendors aren’t naive. They’re well aware of the challenges — and believe that generative AI could solve many of them without hastening their platforms’ demise. In RPA vendors’ minds, RPA and generative AI-powered agents can peacefully co-exist — and perhaps one day even grow to complement each other.

Generative AI automation

UiPath, one of the larger players in the RPA market with an estimated 10,000+ customers, including Uber, Xerox and CrowdStrike, recently announced new generative AI features focused on document and message processing, as well as taking automated actions to deliver what UiPath CEO Bob Enslin calls “one-click digital transformation.”

“These features provide customers generative AI models that are trained for their specific tasks,” Enslin told TechCrunch. “Our generative AI powers workloads such as text completion for emails, categorization, image detection, language translation, the ability to filter out personally identifiable information [and] quickly answering any people-topic-related questions based off of knowledge from internal data.”

One of UiPath’s more recent explorations in the generative AI domain is Clipboard AI, which combines UiPath’s platform with third-party models from OpenAI, Google and others to — as Enslin puts it — “bring the power of automation to anyone that has to copy/paste.” Clipboard AI lets users highlight data from a form, and — leveraging generative AI to figure out the right places for the copied data to go — point it to another form, app, spreadsheet or database.

UiPath Clipboard AI

Image Credits: UiPath

“UiPath sees the need to bring action and AI together; this is where value is created,” Enslin said. “We believe the best performance will come from those that combine generative AI and human judgment — what we call human-in-the-loop — across end-to-end processes.”

Automation Anywhere, UiPath’s main rival, is also attempting to fold generative AI into its RPA technologies.

Last year, Automation Anywhere launched generative AI-powered tools to create workflows from natural language, summarize content, extract data from documents and — perhaps most significantly — adapt to changes in apps that would normally cause an RPA automation to fail.

“[Our generative AI models are] developed on top of [open] large language models and trained with anonymized metadata from more than 150 million automation processes across thousands of enterprise applications,” Peter White, SVP of enterprise AI and automation at Automation Anywhere, told TechCrunch. “We continue to build custom machine learning models for specific tasks within our platform and are also now building customized models on top of foundational generative AI models using our automation datasets.”

Next-gen RPA

Ray notes it’s important to be cognizant of generative AI’s limitations — namely biases and hallucinations — as it powers a growing number of RPA capabilities. But, risks aside, he believes generative AI stands to add value to RPA by transforming the way these platforms work and “creating new possibilities for automation.”

“Generative AI is a powerful technology that can enhance the capabilities of RPA platforms enabling them to understand and generate natural language, automate content creation, improve decision-making and even generate code,” Ray said. “By integrating generative AI models, RPA platforms can offer more value to their customers, increase their productivity and efficiency and expand their use cases and applications.”

Craig Le Clair, principal analyst at Forrester, sees RPA platforms as being ripe for expansion to support autonomous agents and generative AI as their use cases grow. In fact, he anticipates RPA platforms morphing into all-around toolsets for automation — toolsets that help deploy RPA in addition to related generative AI technologies.

“RPA platforms have the architecture to manage thousands of task automations and this bodes well for central management of AI agents,” he said. “Thousands of companies are well established with RPA platforms and will be open to using them for generative AI-infused agents. RPA has grown in part thanks to its ability to integrate easily with existing work patterns, through UI integration, and this will remain valuable for more intelligent agents going forward.”

UiPath is already beginning to take steps in this direction with a new capability, Context Grounding, that entered preview earlier in the month. As Enslin explained it to me, Context Grounding is designed to improve the accuracy of generative AI models — both first- and third-party — by converting business data those models might draw on into an “optimized” format that’s easier to index and search.

“Context Grounding extracts information from company-specific datasets, like a knowledge base or internal policies and procedures, to create more accurate and insightful responses,” Enslin said.
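Mechanically, this resembles retrieval-augmented generation: index company documents, pull the chunks most relevant to a query, and prepend them to the model prompt. A minimal sketch under that assumption, with naive word-overlap scoring standing in for real vector search; none of these function names are UiPath's:

```python
# Sketch of the retrieval-grounding pattern: rank stored document
# chunks against a query and build a context-grounded prompt.
# Word-overlap scoring is a stand-in for embedding-based search.

def score(query: str, chunk: str) -> int:
    """Count query words that also appear in the chunk."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def ground_prompt(query: str, documents: list[str], top_k: int = 2) -> str:
    # Keep only the top_k most relevant chunks as grounding context
    ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The "optimized format" the article mentions corresponds to the indexing step here: data is pre-processed so that this retrieval lookup is fast and accurate.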

If there’s anything holding RPA vendors back, it’s the ever-present temptation to lock customers in, Le Clair said. He stressed the need for platforms to “remain agnostic” and offer tools that can be configured to work with a range of current — and future — enterprise systems and workflows.

To that, Enslin pledged that UiPath will remain “open, flexible and responsible.”

“The future of AI will require a combination of specialized AI with generative AI,” he continued. “We want customers to be able to confidently use all kinds of AI.”

White didn’t commit to neutrality exactly. But he emphasized that Automation Anywhere’s roadmap is being heavily shaped by customer feedback.

“What we hear from every customer, across every industry, is that their ability to incorporate automation in many more use cases has increased exponentially with generative AI,” he said. “With generative AI infused into intelligent automation technologies like RPA, we see the potential for organizations to reduce operating costs and increase productivity. Companies who fail to adopt these technologies will struggle to compete against others who embrace generative AI and automation.”

MKBHD Gets Gaslighted by Rabbit R1

After ripping the Humane Ai Pin apart, MKBHD (Marques Brownlee) now gets gaslighted by the Rabbit R1. “My favorite new AI feature: gaslighting,” he quipped.

Jesse Lyu, the founder and CEO of Rabbit, quickly clarified: “we are aware of this bug. This is related to the time zone bug and it’ll get fixed via OTA before next Tuesday, battery performance improvements too.”

My favorite new AI feature: gaslighting pic.twitter.com/Yb43gB3EkA

— Marques Brownlee (@MKBHD) April 26, 2024

MKBHD’s recent interaction with the Rabbit R1 says a lot about how users interact with AI systems. It goes beyond mere product reviews, prompting us to question whether AI has emotions and can even manipulate users to the extent of gaslighting someone.

“…it’s just an AI that doesn’t actually have feelings but maybe it makes you think it does,” said Alan Cowen, the founder and CEO of Hume AI, in a recent interview on Every.

Enter Empathetic AI

“I think understanding people’s emotional reactions is really key to learning how to satisfy people’s preferences,” said Cowen, introducing the world’s first empathetic AI, EVI.

“If you’re confused it can clarify things and if you’re excited it can kind of build on that excitement and if you’re frustrated it can be conciliatory,” said Cowen.

Most of the time, it also comes down to user experience and how AI systems interact with its users. “In a customer service call we can predict when somebody’s having a good customer service call… with like 99% accuracy sometimes depending on the context versus with language alone it’s like 80%,” added Cowen.

Founded in 2021 by Cowen, a former researcher at UC Berkeley and Google, Hume AI is a research lab and technology company on a mission to ensure that AI is built to serve human goals and emotional well-being.

Cowen believes that voice interfaces will soon be the default way we interact with AI. Speech, he said, is 4x faster than typing, frees up the eyes and hands, and carries more information in its tune, rhythm, and timbre.

“That’s why we built the first AI with emotional intelligence to understand the voice beyond words. Based on your voice, it can better predict when to speak, what to say, and how to say it,” he added.

Recently, the company raised $50 million in Series B funding from EQT Group, Union Square Ventures, Nat Friedman, Daniel Gross, Northwell Holdings, Comcast Ventures, LG Technology Ventures, and Metaplanet.

AIM also tested out EVI, and there is nothing like it.

EVI API is Finally Here!

The company recently unveiled the Empathic Voice Interface (EVI) API, marking the debut of the first emotionally intelligent voice AI API. EVI is now available, offering the ability to receive live audio input and provide both generated audio and transcripts enriched with indicators of vocal expression.

Drawing on 100K conversations with a 10-minute average length and 3 million user messages, EVI introduces features such as discerning appropriate times to speak and crafting empathetic language with precisely the right tone.

The team said EVI can be configured to customer requirements, with the ability to alter personality, response style, and speech content. The platform also supports Mixtral 8x7B via Fireworks, alongside OpenAI and Anthropic models.

In addition, users can connect over WebSocket and run their own text generation server to determine all of EVI’s messages in a conversation. They can also use EVI’s voice by sending it text to be spoken via the API.
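As a rough illustration of that WebSocket flow, the sketch below builds the kind of JSON payloads such a session might exchange. The message shapes and field names are assumptions for illustration only, not Hume's documented wire format:

```python
import json

# Illustrative payload builders for an EVI-style WebSocket session:
# the client streams user audio up, and can ask the voice interface
# to speak text produced by its own generation server.
# Field names ("type", "text", "data") are hypothetical.

def assistant_input(text: str) -> str:
    """Payload asking the voice interface to speak caller-supplied text."""
    return json.dumps({"type": "assistant_input", "text": text})

def user_audio(chunk_b64: str) -> str:
    """Payload carrying a base64-encoded chunk of the user's audio."""
    return json.dumps({"type": "audio_input", "data": chunk_b64})

# In a real client these strings would be sent over a WebSocket
# connection, e.g. with the `websockets` package:
#   async with websockets.connect(EVI_URL) as ws:
#       await ws.send(assistant_input("Hello!"))
```

The division of labour is the interesting part: the developer's server owns what is said, while EVI owns how it is said.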

“Our AI’s strength lies in empowering others through its toolset. Our API is key; it enables users to tailor their experiences and integrate basic tools like web search. It’s about enabling customisation and fostering collaboration, with developers building upon our interface and incorporating user personalisations,” said Cowen.

What’s Next?

Many experts believe that AI systems that understand emotional intelligence are the future. Hume AI is perfectly positioned to revolutionize how users interact with AI systems.

“In the future, you’re going to want to be able to talk to it in crowded places, and you’re also going to want to have it understand your tone of voice in addition to your facial expression so it knows when you’re done speaking and how you’re feeling,” said Cowen, talking about enabling seamless multimodal interaction.

Further, he emphasized the importance of personalisation in AI communication tools to make them more adaptable and personable. This is crucial for applications where AI interacts directly with users, such as customer service, therapy, or educational tools.

“I think customizing the voice is really important, and the personalities, a lot of it you can do with the prompt obviously; you can’t change the underlying accents and voice quality of the voice, so we’re adding more voices too.”

The post MKBHD Gets Gaslighted by Rabbit R1 appeared first on Analytics India Magazine.

Zerodha CTO Warns Companies to Not Look at AI as a Solution Chasing a Problem

Think of generative AI, and close to 83 percent of companies have jumped on the bandwagon, claiming it as a top priority in their business plans. But India’s biggest brokerage firm, Zerodha, is viewing the trend cautiously and in a much more deliberate way, refusing to give in to the AI hype.

“You can’t just unleash these technologies just because you’re excited by it,” said Kailash Nadh, CTO of Zerodha, in an exclusive interaction with AIM. “It should not really be looked at as a solution chasing a problem.”

Race for AI Implementation

Speaking about how AI has been cautiously leveraged at Zerodha, Nadh noted that companies are mostly force-fitting AI.

“It’s only been a year, and it shouldn’t be like, oh, there’s an LLM, where do we use it? When you stumble upon a problem that requires an LLM or its capabilities is when you use it,” he said.

Nadh believes that it’s too early in the AI cycle and that there’s a lot of hype and frenzy around it. He references early implementations when ChatGPT was released, and two months later, with its API also being released, everybody started using it.

“Such implementations tend to not be very well thought out. So, this requires more time for us to see better model integration. I’m not talking about LLMs alone, but all kinds of AI/ML technologies that are emerging.”

AI at Zerodha

Zerodha’s adoption of AI has been slow and steady. It was one of the first companies to come out with an internal AI policy to safeguard its employees’ jobs.

“When we instituted [AI policy] it really helped calm down people. A lot of people were really worried, and we have been extremely careful about integrations,” said Nadh.

The company is experimenting with self-hosted open source models to analyse the semantic content of user queries and interactions with the support team, to help classify and address urgent concerns on priority.

“It’s just one R&D project and we’re doing a bunch of very similar things, but you’d notice that these are all things that complement our existing workflows. We’re not trying to replace or outright apply AI or introduce chatbots. We don’t even have an AI chatbot on a support portal. We are being very very cautious and careful and we’re trying to improve the quality of life for the people here using these models and technologies,” said Nadh.
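The support-triage idea Nadh describes can be sketched roughly as below, with a trivial keyword heuristic standing in for the self-hosted model's semantic scoring. All names are illustrative, not Zerodha's implementation:

```python
# Sketch of support-query triage: classify incoming queries by
# urgency so critical ones surface first. A self-hosted language
# model would do the semantic scoring; a keyword heuristic stands
# in here so the pipeline shape is visible.

URGENT_MARKERS = {"unauthorised", "blocked", "failed", "margin", "fraud"}

def urgency(query: str) -> str:
    words = set(query.lower().split())
    return "urgent" if words & URGENT_MARKERS else "routine"

def triage(queries: list[str]) -> list[str]:
    # Urgent queries first; Python's stable sort preserves
    # arrival order within each class.
    return sorted(queries, key=lambda q: urgency(q) != "urgent")
```

Note that this complements the existing support workflow rather than replacing it: humans still answer the queries, the model only orders the queue.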

Proponent of Open Source Ecosystem

While AI implementation takes its own course at Zerodha, the company has been a strong advocate and user of open-source software from the beginning.

Zerodha built its entire tech infrastructure on the free and open-source software that was available and deemed the best in 2013.

“We’ve had a very first-principles, open-source-based, self-hosted, we-need-to-own-the-tech-piece, sort of view from the get-go,” said Nadh.

Zerodha has been, and still is, a heavy user of the Python programming language. The company uses PostgreSQL as its database, which Nadh believes has served it ‘really well’ over the last decade. It also uses Redis, one of the most popular in-memory databases, and adopted the Go programming language when it was still in its infancy in 2014.

“It has really served us well and has played a huge part in us being able to scale and build so much with a small tech team,” he said.

“Most companies that have been incorporated or built in the last decade, use 90% open source to power their stuff. So everybody from governments to for-profit corporations to you know tiny little startups that are being built, everybody immediately picks and chooses the highest quality open source to build whatever they want,” said Nadh.

Nadh’s background as an engineer and hobbyist programmer (which he remains to this day) has not only driven open-source initiatives at Zerodha but also contributed to building and encouraging this ecosystem in India.

Open Source for India

The FOSS (Free and Open Source Software) United Foundation, a non-profit organisation where Nadh serves as a founding director, was set up in 2020 to foster the open-source ecosystem in India.

“Four years ago in India, we had this huge startup boom. 100% of the startups were built on top of open source technologies one way or another, but you see very little open source innovation coming out of India,” said Nadh, explaining what prompted him to start such a community in India.

“We have one of the highest concentration of engineers and software developers in the world, but, if you look at the number of open source projects that come out of India we would rank really really really low, and that is so disproportionate. It is not just sad, it’s also scary,” he said.

“Of the tens of thousands of AI-related breakthroughs in the last 12 months, 98% are from hobbyists and open source communities. That’s the beauty of open source,” said Nadh. However, Nadh pointed out the not-so-favourable environment the industry has built for the community.

“There are skilled engineers, and there are people making all kinds of amazing things. It’s not coming out because there’s no environment that is conducive. So, I would blame the industry as the biggest culprit in us just consuming and consuming and not producing culturally,” he said.

Nadh believes that the culture of innovation changes when students enter the professional field: spirited young coders who innovate and participate in open-source activities and hackathons in college stop contributing once they join a startup or a tech company.

“One of the biggest reasons is that our industry as a whole has not really encouraged engineers to participate and give back. If your company does not have an open source culture where it appreciates using but also strives to give back, then you can’t really produce.”

Nadh also spoke about how the stack has evolved over time, with several dozen high-quality free and open-source software projects now powering it end to end, and highlighted how widely open-source software is used across companies of every kind.

The post Zerodha CTO Warns Companies to Not Look at AI as a Solution Chasing a Problem appeared first on Analytics India Magazine.

Losing control of your company’s data? You’re not alone


To survive and thrive, data likes to be richly and logically connected across a single, virtually contiguous, N-dimensionally extensible space. In this sense, data is ecosystemic and highly interdependent, just as the elements of the natural world are. Rich connectivity at scale is why graph databases and decentralized webs such as the Interplanetary File System make sense.

The more well-connected data is and the more readily and holistically it can interact and evolve as a part of an interoperable system, the more useful, resilient and reusable it becomes. That’s why siloing data and stranding description logic in compiled applications chokes data off and eventually kills it, even though it may be high quality and useful to begin with.

Stratos Kontopoulos of Foodpairing AI made a key point about the value of interoperability at the data layer in an April 2024 LinkedIn post. He observed that when the carbon and water footprint data for avocado, orange and pineapple in Wikidata is updated, Foodpairing’s W3C standards-based knowledge graph refreshes as a result. Those standards simplify such integrations, allowing interoperability at scale.

How companies have been losing data control

Most companies that have been around for fifty years or more have been losing control of their data at an accelerating pace ever since the advent of the personal computer, as data fragmentation compounds generation by generation. They lose control whenever they place more of their trust in incumbent application providers, which include the largest software companies on the planet.

The more they sign up for new applications, the more control these companies lose of what should ideally be a single, interacting, unitary asset across the data layer. The more they migrate to public clouds while holding on to an application-centric, fragmentary data view of IT, the more control they lose, and the harder data integration becomes.

As Dave McComb points out, each application created using this old development paradigm generates its own data model, creating yet another integration task. By contrast, semantic standards-based, knowledge-graph-driven development enforces a single shared data model across applications.

It doesn’t help that the C-suite has historically been passive and myopic when it comes to data. The 80/20 rule applies to companies and to the C-suites that run them.

The >80 percenters (doomed to mediocrity or worse) accept bad advice when it comes to automation and run with it. Years later, when revisiting a decision that resulted in a clear failure, these companies neglect to get to the root cause of the problem, which means they keep making the same mistakes. The <20 percenters (those who succeed), by contrast, learn from their mistakes.

It’s very hard to disabuse the >80 percenters of the notion that subscribing to an app can solve a problem. The point of an app, they may not have realized, is actionable data. If the app instead is walling data off and making it less accessible, it defeats the purpose of data-driven decision making.

I’m still hopeful that the <20 percenters will come around to changing their development paradigms so that they can put organic, interoperable, contiguous data first.

Data loss in the 2000s

Nicholas Carr wrote a bestselling book, Does IT Matter?, an expanded version of his 2003 Harvard Business Review article “IT Doesn’t Matter,” which basically recommended that big companies outside the tech sector hand off IT responsibilities to others. In the HBR article, Carr said, “You only gain an edge over rivals by having or doing something that they can’t have or do.”

Therefore, Carr reasoned, because IT had become a commodity and was not core to most businesses, those businesses should outsource IT entirely. Don’t bother understanding IT, he figured; others will take care of it.

Carr couldn’t have been more wrong. The only way enterprises can save themselves is by being aggressively proactive and protective of the data layer they’re responsible for. The only way to protect that data layer scalably is by enforcing a unitary, extensible data model.

Why the loss of data control? Faulty, dated paradigms still in place

Outsourcing 2000s-style, à la Carr’s vision, turned out to be problematic. Then the advent of the iPhone and Android spawned a new wave of mobile apps, so becoming entirely hands-off didn’t happen. Companies outside the tech sector instead reinvested in internal development teams, or contracted with third-party developers for mobile apps.

Mobile was no better than desktop in terms of its development paradigm. It perpetuated the same flawed approach: data siloed by default, and logic trapped inside the compiled application when it instead needed to live with, and be updated alongside, the data. The result was even more fragmentation, because so many new mobile apps appeared.

Every time companies spend more money contributing to the same old status quo application-first, data-takes-a-back-seat development paradigm, they will lose more control. That’s uncontrolled silo sprawl and resulting loss of data control by definition.

This data control loss phenomenon isn’t limited to traditional applications, and it’s not just due to the growing decentralization of computing. Delegation to third parties has become much easier, for instance. Today’s workflows are replete with leased application functionality from third parties, which increases the magnitude of risk that companies expose themselves to, among other concerns. Every time a company adds a partner actively managing a piece of its workflow, the company increases the risk footprint it has to manage.

AI trends are also having their own considerable accelerating impact, from a risk perspective and otherwise. For example, the quirky, horrendously inefficient data lifecycle management habits and the inorganic, lossy data development paradigms associated with most machine learning also undermine organizational control of data.

A major problem for organizations adopting generative AI right now is that they’re unwittingly and uncontrollably sharing data with generative AI third parties, data they might think twice about sharing if they understood more about where it actually ends up.

How more data loss happens in traditional and AI apps

Organizations used an average of 130 software-as-a-service (SaaS) apps in 2022, up from 80 in 2020, according to BetterCloud’s 2023 State of SaaSOps report. Compounding Statista’s estimated 15 percent average annual growth rate for the SaaS market through 2023 and 2024, the average organization may now be using around 170 SaaS apps, not counting shadow applications billed to individual employees’ credit cards that the organization hasn’t been able to track.
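The compounding behind that estimate is easy to check. A couple of lines of Python, using the BetterCloud base figure and the Statista growth rate cited above, give roughly 150 apps after one year of growth and just over 170 after two:

```python
base_2022 = 130   # BetterCloud's reported 2022 average SaaS apps per organization
growth = 1.15     # Statista's ~15% annual SaaS market growth estimate

est_2023 = base_2022 * growth          # one year of compounding
est_2024 = base_2022 * growth ** 2     # two years of compounding
print(f"2023 estimate: {est_2023:.1f}, 2024 estimate: {est_2024:.1f}")
```

Either figure understates the real footprint once untracked shadow apps are added in.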

Large enterprises with over $100 billion in annual revenue that haven’t implemented strict procurement controls may each be subscribing to thousands of different SaaS apps. Most of those apps are likely underutilized, with only a subset of potential users making effective use of them.

Traditionally, each application, whether SaaS or not, is tightly coupled to its own database with its own data model, or at least the data model of the provider. If you’re using SAP, you may be using Integration Suite. If you’re using Adobe, perhaps it’s Creative Cloud.

Large enterprises may be using dozens of different third-party integration platforms. When they hire a new staffer responsible for integration, that staffer may specify their own integration platform preference. Most staffers aren’t thinking of a larger, system-wide integration challenge. They’re just trying to solve the problem that immediately concerns them. The result is a proliferation of integration tools in addition to other SaaS apps. Procurement departments may not be aware this proliferation is happening, or why.

To harness the full potential of a traditional app’s data requires specific, system-level integration effort. Even if you’re using a capable integration platform, you’ve got to deal with the fact that the data the app creates tends to be siloed by default. To integrate requires a separate effort, and when you’re done, it may be only application layer integration, which is far less flexible and capable than the best data layer integration methods.

The implication is that for established organizations in 2024, what dataware company Cinchy calls an “integration tax” could be growing at 15 percent or more every year for typical SaaS apps. The integration tax for AI apps must be substantially higher, considering the manual effort evidently required to bring all of them into the fold in, for example, a data warehouse or data lake.

The related challenge of managing copied data in warehouses has spawned a zero-copy integration standard. (See https://www.techtarget.com/searchenterpriseai/tip/The-role-of-trusted-data-in-building-reliable-effective-AI for more information.)

Interoperable ecosystems, not just data sprawl that fills storage and processing clouds

The dominant development paradigms in use today silo data and strand code by default. What’s the alternative? A development paradigm that promotes a unitary, organic, interacting data ecosystem such as the one Dave McComb discusses in his books.

If the <20 percenters push for such a paradigm by starting with semantically (meaningfully, logically) connected data in knowledge graphs, the >80 percenters and their vendors will follow, because they’re followers anyway. But the overarching theme here is that companies have to be hands-on, focusing on making the data layer manageable and reusable. Data, after all, is what determines the shape of AI models, and core internal data is what companies run on.

Photo-sharing community EyeEm will license users’ photos to train AI if they don’t delete them

By Sarah Perez (@sarahintampa)

EyeEm, the Berlin-based photo-sharing community that exited last year to Spanish company Freepik after going bankrupt, is now licensing its users’ photos to train AI models. Earlier this month, the company informed users via email that it was adding a new clause to its Terms & Conditions that would grant it the rights to upload users’ content to “train, develop, and improve software, algorithms, and machine-learning models.” Users were given 30 days to opt out by removing all their content from EyeEm’s platform. Otherwise, they were consenting to this use case for their work.

At the time of its 2023 acquisition, EyeEm’s photo library included 160 million images and nearly 150,000 users. The company said it would merge its community with Freepik’s over time.

Once thought of as a possible challenger to Instagram — or at least “Europe’s Instagram” — EyeEm had dwindled to a staff of three before selling to Freepik, TechCrunch’s Ingrid Lunden previously reported. Joaquin Cuenca Abela, CEO of Freepik, hinted at the company’s possible plans for EyeEm, saying it would explore how to bring more AI into the equation for creators on the platform.

As it turns out, that meant selling their work to train AI models.

Now, EyeEm’s updated Terms & Conditions reads as follows:

8.1 Grant of Rights – EyeEm Community

By uploading Content to EyeEm Community, you grant us regarding your Content the non-exclusive, worldwide, transferable and sublicensable right to reproduce, distribute, publicly display, transform, adapt, make derivative works of, communicate to the public and/or promote such Content.

This specifically includes the sublicensable and transferable right to use your Content for the training, development and improvement of software, algorithms and machine learning models. In case you do not agree to this, you should not add your Content to EyeEm Community.

The rights granted in this section 8.1 regarding your Content remains valid until complete deletion from EyeEm Community and partner platforms according to section 13. You can request the deletion of your Content at any time. The conditions for this can be found in section 13.

Section 13 details a complicated process for deletions that begins with first deleting photos directly — which would not impact content that had been previously shared to EyeEm Magazine or social media, the company notes. To delete content from the EyeEm Market (where photographers sold their photos) or other content platforms, users would have to submit a request to support@eyeem.com and provide the Content ID numbers for those photos they wanted to delete and whether it should be removed from their account, as well, or the EyeEm market only.

Of note, the notice says that these deletions from EyeEm Market and partner platforms could take up to 180 days. Yes, that’s right: requested deletions take up to 180 days, but users only have 30 days to opt out. That means the only way to beat the deadline is to manually delete photos one by one.

Worse still, the company adds that:

You hereby acknowledge and agree that your authorization for EyeEm to market and license your Content according to sections 8 and 10 will remain valid until the Content is deleted from EyeEm and all partner platforms within the time frame indicated above. All license agreements entered into before complete deletion and the rights of use granted thereby remain unaffected by the request for deletion or the deletion.

Section 8 is where licensing rights to train AI are detailed. In Section 10, EyeEm informs users they will forgo their right to any payouts for their work if they delete their account — something users may think to do to avoid having their data fed to AI models. Gotcha!

EyeEm’s move is an example of how AI models are being trained on the back of users’ content, sometimes without their explicit consent. Though EyeEm did offer an opt-out procedure of sorts, any photographer who missed the announcement would have lost the right to dictate how their photos were to be used going forward. Given that EyeEm’s status as a popular Instagram alternative had significantly declined over the years, many photographers may have forgotten they had ever used it in the first place. They certainly may have ignored the email, if it wasn’t already in a spam folder somewhere.

Those who did notice the changes were upset they were given only 30 days’ notice and no option to bulk delete their contributions, making it more painful to opt out.

Has anyone figured out a way to batch delete their photos from #EyeEm. I got this email yesterday. While I only have 60 photos there, I'd prefer not to feed the training data beast for free… pic.twitter.com/lUuDR5BnGb

— Powen Shiah @polexa@tech.lgbt (@polexa) April 5, 2024

Suggest existing @EyeEm users run away fast. They've sneaked in this destructive rights grab as an opt out: "These rights now include the sublicensable and transferable right to use your Content to train, develop, and improve software, algorithms, and machine-learning models."

— Joel Goodman (@pixel8foto) April 3, 2024

Requests for comment sent to EyeEm weren’t immediately returned, but given the 30-day countdown, we’ve opted to publish before hearing back.

This sort of dishonest behavior is why users today are considering a move to the open social web. The federated platform, Pixelfed, which runs on the same ActivityPub protocol that powers Mastodon, is capitalizing on the EyeEm situation to attract users.

In a post on its official account, Pixelfed announced “We will never use your images to help train AI models. Privacy First, Pixels Forever.”

EyeEm, the photo marketplace, changes hands as Freepik picks it up out of bankruptcy