Open Source is a Good Start for India

Cropin’s Aksara AI model is a perfect example of how you can build a solution on top of open-source models. Aksara is a micro language model built on top of Mistral-7B-v0.1, aiming to democratise agricultural knowledge to empower farmers.

There are other models, like OpenHathi and Tamil LLaMA, built on open-source models that aim to break the language barrier by putting AI in the hands of general audiences in their own languages.

Sure, there are initiatives and companies building LLMs from scratch in India, like Tech Mahindra’s Project Indus, Sarvam AI, and Krutrim AI, but these have yet to be released to the general public, and for now, open-source LLMs are the only way forward.

As Nandan Nilekani rightly pointed out, India’s focus should be on using AI to make a difference in people’s lives. “We are not in the arms race to build the next LLM, let people with capital, let people who want to peddle chips do all that stuff… We are here to make a difference, and our aim is to put this technology in the hands of people,” Nilekani said.

Multiple Languages? Open Source is Here to Help

Apart from cost and other resources, having 22 official languages and hundreds of dialects is a major challenge in building an AI model for India. Here’s where the core features of open source come into play.

To solve this, India can use a Mixture of Experts (MoE) approach to blend available language-specific models like Tamil LLaMA and Kannada LLaMA into one multilingual model that runs on minimal resources.
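As a rough illustration of the routing idea (a true MoE routes tokens between experts inside one network), the sketch below dispatches whole prompts to language-specific expert models based on script detection. The Hugging Face model IDs and the detection heuristic are illustrative assumptions, not any project’s actual design.

```python
# A rough sketch of routing prompts to language-specific "expert" models.
# Model IDs and script-based detection are illustrative assumptions.
from functools import lru_cache
from transformers import pipeline

EXPERTS = {
    "ta": "abhinand/tamil-llama-7b-instruct-v0.1",  # assumed model IDs
    "kn": "Tensoic/Kan-LLaMA-7B-base",
}

def detect_language(text: str) -> str:
    # Tamil (U+0B80-U+0BFF) and Kannada (U+0C80-U+0CFF) use distinct scripts.
    for ch in text:
        if "\u0B80" <= ch <= "\u0BFF":
            return "ta"
        if "\u0C80" <= ch <= "\u0CFF":
            return "kn"
    return "ta"  # arbitrary fallback for this sketch

@lru_cache(maxsize=None)
def get_expert(lang: str):
    # Cache pipelines so each expert model is loaded only once.
    return pipeline("text-generation", model=EXPERTS[lang])

def generate(prompt: str) -> str:
    expert = get_expert(detect_language(prompt))
    return expert(prompt, max_new_tokens=128)[0]["generated_text"]
```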

It is also much easier to train a model when one already exists for a neighbouring language. For example, if you want a model for Awadhi and LLMs for Hindi are already available, adapting one of them for Awadhi is far easier than building from scratch.
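As a sketch of what such adaptation could look like in practice, the snippet below continues pretraining a Hindi-capable open model on an Awadhi text corpus using Hugging Face’s Trainer. The base model ID and the corpus file are assumptions for illustration.

```python
# A minimal sketch of adapting an existing Hindi model to Awadhi via
# continued pretraining. The base model and corpus path are assumptions.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

base = "sarvamai/OpenHathi-7B-Hi-v0.1-Base"  # a Hindi-capable open model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Plain-text Awadhi corpus, one passage per line (hypothetical file).
corpus = load_dataset("text", data_files={"train": "awadhi_corpus.txt"})["train"]
tokenized = corpus.map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="awadhi-lm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```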

Open-source LLMs like BLOOM and IndicBERT, which are already pre-trained in multiple Indian languages, are a perfect example of how easy it will be to jumpstart the development of multilingual LLMs.

Initiatives like Core ML from Wadhwani AI are reportedly creating reusable libraries and open-sourcing their data and code so that their efforts can be reused for further development.

Another good example is how Google’s Flan-T5-XXL was used for legal text analysis focused on the Indian Constitution, yet another direct indication of how open source is helping Indian citizens.

Costs are Reduced Drastically

Training a large model like GPT-3 from scratch is estimated to cost anywhere from $4 million to $10 million or more, while some open-source models that match or beat GPT-3 are available for free. For a developing country like India, it makes sense to use such open-source models rather than spend millions (or billions) on training in 22 languages.

Research shows that data scientists spend almost 50% of their time cleaning data. The problem is even worse when dealing with multiple Indian languages and dialects, each with its own quirks of sarcasm, ambiguity, and irony.

Opting for a pre-trained open-source model saves a lot of time when building something useful around it. Building on open-source LLMs also gives you the advantage of transfer learning, where knowledge a model has captured from pre-training on large datasets is transferred to new tasks. This can help many new Indian AI startups that don’t have the resources to train models from scratch.

For India, building LLMs from scratch while using open-source LLMs in parallel makes sense: open models let you leverage AI to solve problems today, and homegrown LLMs can strengthen the Indian AI ecosystem over time.

When you work with an open-source model, it is already pre-trained, and there’s flexibility in training it further in a specific language and dialect. Furthermore, users worldwide can contribute to your project with datasets that never made it to your list, making it way more robust than a closed-source model.

Yann LeCun Delays Elon Musk’s AGI Plans 

xAI chief Elon Musk and Meta’s chief AI scientist Yann LeCun recently engaged in friendly banter on X, and it looks like there is no end in sight. LeCun is clearly winning the argument or, more accurately, successfully derailing Musk’s plans to build a new supercomputer with 100,000 NVIDIA GPUs and achieve AGI next year.

Interestingly, the drama is unfolding on X rather than on Meta’s Threads, which was launched about a year ago. Meta and xAI are competitors, with both companies building open-source LLMs that can provide real-time information.

It is unclear whether this is a deliberate move by Mark Zuckerberg or LeCun – who understands LLMs like no other – to flood X with pessimistic views so that Grok fails. “My relationship with X/Twitter is love/hate,” quipped LeCun, further fuelling speculation.

This development comes against the backdrop of xAI announcing a $6 billion Series B funding round to expand its team, making xAI the second-most valuable AI startup at a $24 billion valuation, surpassed only by OpenAI at an $86 billion valuation.

It is also impressive that xAI surpassed Anthropic in less than a year; Anthropic now stands as the third-most valuable AI startup at an $18 billion valuation.

LeCun Advises People Not to Join xAI

LeCun believes that the LLMs powering generative AI products such as ChatGPT will never be able to reason and plan like humans, or achieve AGI. He is also of the opinion that animals are more intelligent than AI.

“General intelligence, artificial or natural, does not exist. Cats, dogs, humans and all animals have specialised intelligence,” said LeCun recently.

The banter between LeCun and Musk started after the latter invited people to join xAI’s mission.

“Join xAI if you can stand a boss who claims that what you are working on will be solved next year (no pressure),” responded LeCun, advising interested candidates against joining Musk’s company.

Further, he said that he likes Musk’s cars, rockets, solar panels, and satellite network but dislikes his vengeful politics, conspiracy theories, and hype.

LeCun believes his stance is the correct one because he is a “scientist, not a business or product person”, unlike Musk.

When Musk questioned his contribution, asking how much research he had conducted “in the last five years,” LeCun candidly replied, “Over 80 technical papers published since January 2022.”

A few days ago, LeCun clarified that FAIR has roughly 500 scientists and engineers, and he doesn’t run FAIR; Joelle Pineau does. “In fact, I don’t run anything, I’m nobody’s boss. I’m the Chief AI Scientist: I provide ideas and advice to teams,” he said.

xAI vs Meta

Musk recently told investors that his goal is to have the supercomputer operational by fall 2025. Once completed, this supercomputer, comprising NVIDIA’s flagship H100 GPUs, will be at least four times larger than the biggest existing GPU clusters, including those built by Meta Platforms for training AI models.

Currently, xAI is reportedly set to spend $10 billion on Oracle Cloud servers.

In an interview with Norway Wealth Fund CEO Nicolai Tangen on X Spaces, Musk revealed that training the Grok 2 model requires approximately 20,000 NVIDIA H100 GPUs. He added that training the Grok 3 model and future versions will necessitate 100,000 NVIDIA H100 GPUs.

In April, xAI introduced Grok-1.5V, a first-generation multimodal model. In addition to its strong text capabilities, Grok can process a wide variety of visual information, including documents, diagrams, charts, screenshots, and photographs.

Meanwhile, Musk’s Tesla, now headquartered in Austin, is also developing a Dojo supercomputer. NVIDIA provided 35,000 H100 GPUs to Tesla, paving the way for the breakthrough performance of FSD Version 12, its latest vision-based autonomous driving software.

As far as Meta is concerned, LeCun recently confirmed that the company has obtained $30 billion worth of NVIDIA GPUs to train its AI models: enough money to run a small nation, or to have put a man on the moon in 1969.

Earlier this year, Zuckerberg announced that Meta is building massive compute infrastructure to train Llama 3 and will acquire 350,000 H100s by the end of this year, aiming for a total of almost 600,000 H100s worth of compute.

During the latest NVIDIA earnings call, the company said that the big highlight of the quarter was Meta’s announcement of Llama 3, their latest LLM, which was trained on a cluster of 24,000 H100 GPUs.

Llama 3 powers Meta AI, a new AI assistant available on Facebook, Instagram, WhatsApp, and Messenger. Using Meta AI, users can access real-time information from across the web without having to bounce between apps.

Interestingly, Meta AI Assistant is quite similar to xAI’s Grok, as both are generative AI applications on social media platforms and generate real-time information. Recently, Grok has faced criticism for generating hallucinated content on several occasions.

Meanwhile, Meta is yet to release the Llama 3 400B models, which are still in training. The company said that over the coming months, they will release multiple models with new capabilities, including multimodality, the ability to converse in multiple languages, a much longer context window, and stronger overall capabilities.

OpenAI is Likely to Reach AGI Before xAI and Meta

According to reports, Microsoft and OpenAI are also working on plans for a data centre project that could cost as much as $100 billion and include an AI supercomputer called Stargate, set to launch in 2028.

OpenAI chief Sam Altman recently proposed the concept of Universal Basic Compute, in which everyone would have access to a portion of GPT-7’s computing resources.

“I wonder if the future looks something more like Universal Basic Compute than Universal Basic Income, and everybody gets like a slice of GPT-7 compute,” said Altman in a recent episode of the All-In Podcast.

Most recently, OpenAI announced that it had started training its next frontier model, GPT-5, which it anticipates will bring the next level of capabilities on its path to AGI.

Sadly, there is no convincing LeCun.

A few days ago, he took a dig at OpenAI, sarcastically saying: “Come work at ClosedAI. With AGI just around the corner, your shares will be worth 42 sextillion dollars. We can claw back your vested shares if you quit, unless you sign a non-disparagement agreement. Oh wait, sorry, we didn’t realise our contract was this harsh until someone published an article on it. You can keep them.”

Zscaler CEO Denies Broadcom Acquisition Rumours

In a LinkedIn post on Wednesday, Zscaler CEO, chairman, and founder Jay Chaudhry firmly denied rumours that the cloud security company is entertaining acquisition offers from Broadcom.

“I want to set the record straight that neither I nor the Zscaler board of directors are seeking or entertaining any offers to acquire Zscaler. Any reports stating otherwise are untrue,” Chaudhry wrote. The rumours appear to have originated from a week-old anonymous Substack post claiming Broadcom had made a $38 billion offer for Zscaler.

Chaudhry emphasised Zscaler’s strong position in the rapidly growing zero-trust security market. “Zscaler is at the forefront of a major technology disruption, and I believe that we are in an excellent position to lead the future of zero trust security,” he said.

The CEO directed investors to Zscaler’s official channels, including its website, investor relations site, blog, press releases, SEC filings, and social media, for any material information about the company going forward.

Zscaler, founded in 2007, is a leading provider of cloud-based security solutions. Its Zero Trust Exchange platform helps enterprises securely connect users, devices and applications using zero trust principles. In recent months, Zscaler has made strategic acquisitions to expand its zero trust capabilities, including Airgap Networks for agentless network segmentation and Avalor for AI-powered data protection.

The company reported revenue of $525 million and a net loss of $28.5 million in its most recent quarter. It is slated to announce Q3 fiscal 2024 earnings on May 30. Zscaler’s stock price did not react to Chaudhry’s statement, as US markets were closed for the Memorial Day holiday.

Observability Tools can Now Monitor LLMs Along with DevOps Environments

For cloud-native businesses, observability platforms are essential for gaining insights into application performance. DevOps practices thrive on the foundation of observability, empowering teams to iterate rapidly and deliver high-quality applications.

The Indian startup ecosystem is rich with cloud-native businesses. New Relic, one of the leading observability platforms, serves close to 12,000 customers in India.

“I’ve visited India multiple times, and the fact that you can get things delivered in under 10 minutes is unparalleled. Trust me, I’ve been to many places, and this does not happen anywhere.

“Therefore, the demand for the applications and performance skyrockets here. A lot of these companies – the Swiggies and Ola Cabs of the world – use our observability platform,” Ashan Willy, CEO at New Relic, told AIM in an exclusive interview.

New Relic, with over 80,000 customers worldwide, caters to startups as well as large enterprises. Willy believes AI will bring a transformational change in the observability space. “I think at every inflexion point, observability took a step forward, and it will be the same with AI,” he said.

LLM Observability is Here

New Relic’s comprehensive platform offers over 30 capabilities, delivering a seamless and connected experience throughout the various layers of the technology stack at every stage of the software lifecycle.

The company is now adding generative AI to the mix.

Last year, the company introduced New Relic Grok, which, according to the company, is the world’s first generative AI assistant designed specifically for observability.

Willy believes that most companies today leverage LLMs in some way, and hence, “Observability offers the chance to monitor applications comprehensively, providing insights into both the application’s end-to-end performance as well as the LLMs.”

For cloud-native businesses, LLMs are often integrated with existing operations. New Relic AI monitors AI-specific metrics like token usage, costs, and response quality, and integrates with traditional application performance monitoring.
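To make the idea concrete, here is a minimal, vendor-neutral sketch of recording per-call token and cost metrics with OpenTelemetry’s metrics API. This is not New Relic’s actual agent code; the model name and pricing figure are assumptions.

```python
# A vendor-neutral sketch of recording per-call LLM metrics (tokens, cost).
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (ConsoleMetricExporter,
                                              PeriodicExportingMetricReader)

metrics.set_meter_provider(MeterProvider(
    metric_readers=[PeriodicExportingMetricReader(ConsoleMetricExporter())]))
meter = metrics.get_meter("llm.monitoring")

token_counter = meter.create_counter("llm.tokens", unit="tokens")
cost_counter = meter.create_counter("llm.cost", unit="usd")

def record_llm_call(model: str, prompt_tokens: int, completion_tokens: int,
                    usd_per_1k_tokens: float) -> None:
    # Attach the model name so dashboards can slice metrics per model.
    attrs = {"model": model}
    total = prompt_tokens + completion_tokens
    token_counter.add(total, attrs)
    cost_counter.add(total / 1000 * usd_per_1k_tokens, attrs)

# Hypothetical usage after one LLM API call:
record_llm_call("gpt-4o", prompt_tokens=420, completion_tokens=180,
                usd_per_1k_tokens=0.01)
```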

Other players in the space, such as Datadog and Dynatrace, have also introduced LLM observability solutions as part of their platforms.

Moreover, generative AI will enhance predictive analysis and reduce false positives, leading to more accurate and actionable insights.

“One common challenge with AI/ML is false positives, leading to unwanted alerts and actions based on inaccurate information. However, with generative AI providing contextual insights, predictive analysis becomes more feasible.

“Today, nobody can drive responses on top of the AI information yet, but that’s going to change.”

AI is Democratising Observability

New Relic Grok allows users to ask questions about their systems in natural language and get insights. Willy believes this will democratise observability in a big way.

“Why shouldn’t you have more of a Google-like UI where you come in and ask a question in plain English rather than learning some sort of crazy language, right?

“So I think the first thing it does is democratise observability. Under the New Relic AI umbrella, we released something called Grok, an AI assistant before [Elon] Musk stole the name,” Willy laughed.

Moreover, AI also presents a huge opportunity for companies in this space to grow. “If there are roughly 30 million developers in the world, only about 2 million of them deal with observability on a daily basis. This presents us with a big opportunity,” he said.

“How is Jensen Huang going to make his next trillion dollars? It’s not by selling chips to the data centre, the hyperscalers, because they are going to white label that. So, he’s got to move that stuff to the edge. You want to move it close to the edge, which means there’s more things to observe,” Willy pointed out.

The Industry is Asking for a Consumption Pricing Model

In recent years, as cloud costs rose, so did the expenses associated with observability, presenting a significant challenge for numerous cloud-native enterprises.

To address this, New Relic has introduced a consumption-based pricing, or compute, model, which means customers pay only for what they consume.

“We’ve introduced things like telemetry data, pipelines, all of that, and cost management tools. I think that’s a big thing. So how do I observe everything I have in a way that makes it effective for me?

“Hence, we have introduced the consumption-based pricing model which customers love, but the investors, not so much. But I think over time, if customers like it, investors will come to like it as well.”

Willy also points out that New Relic is the only company in the core observability space providing a consumption-based pricing model.

New Relic is Embracing Open Source

Open source continues to hold sway across domains, including observability.

New Relic, too, is embracing open source. “A lot of our customers are asking us to embrace open source and open telemetry, which is a portion of open source that is very relevant for the observability space. We’re embracing that in a big way,” Willy said.

Modern cloud-native applications are distributed, making the capture and export of telemetry data complicated. OpenTelemetry standardises telemetry data generation, collection, and export.

With vendor-neutral APIs, SDKs, and tools, this open-source framework enables organisations to instrument applications universally, promoting data delivery to diverse observability backends.
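A minimal sketch of that vendor neutrality: the instrumented span below could be shipped to New Relic, Datadog, or any other OTLP-compatible backend simply by changing the exporter endpoint (the URL here is a placeholder).

```python
# A minimal sketch of vendor-neutral tracing with OpenTelemetry: swap the
# OTLP endpoint to change observability backends without re-instrumenting.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(
    OTLPSpanExporter(endpoint="https://otlp.example.com/v1/traces")))  # placeholder
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

with tracer.start_as_current_span("process_order") as span:
    span.set_attribute("order.items", 3)
    # ... business logic being observed ...
```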

Willy’s statement does hold true, as New Relic is one of the major contributors to the OpenTelemetry project. The company has open-sourced many of its agents, integrations, and SDKs under open-source licences.

QX Lab AI Launches Hybrid Generative AI Multimodal Platform Ask QX PRO

Dubai-based startup QX Lab AI has announced the launch of Ask QX PRO, an advanced version of its Generative AI platform Ask QX.

Ask QX PRO introduces multimodal features like text-to-image, image-to-text, document analysis, and text-to-code alongside its existing text-to-text capabilities, covering a wider array of functionalities and requirements.

Since its launch earlier this year, Ask QX has garnered over 15 million users, with 4 million of them engaging with the platform on a regular basis, the startup said.

QX Lab AI’s platform is built on its own foundational model, which means it trains on its own datasets and offers its own API. The startup claims this makes it one of only a handful of foundational-model companies in the world, alongside OpenAI, Google (Gemini), Anthropic, and Cohere.

Ask QX PRO has an improved version of the previous model’s text-to-text feature. Its text-to-image feature generates clear images from textual input, while the image-to-text feature produces detailed textual descriptions of images.

The platform’s text-to-code feature transforms natural-language descriptions into code snippets, while the code-editing feature allows users to edit the code base, add new code, and fix faulty code with real-time suggestions and corrections.

Furthermore, the document analyser feature allows users to browse and search documents, scan for important data, and extract information from them. These improvements make Ask QX PRO broadly useful and reliable across practical applications.

For its multimodal architecture tailored for B2B industries, Ask QX PRO has introduced two pioneering technologies: the Advanced Multimodal Synthesis System (AMSS) and the Dynamic Integration and Synthesis Matrix (DISM).

AMSS employs a state-of-the-art transformer-based framework, trained on 20 trillion tokens and featuring a complex architecture of multiple small expert models, enabling robust data integration and facilitating intricate interactions among varied data types such as text and images—ideal for complex B2B applications.

This integration is enhanced through sophisticated manifold alignment and optimised Bayesian integration, projecting inputs into a unified, comprehensive framework. DISM utilises advanced cross-modal attention mechanisms alongside recursive tensor decomposition to seamlessly synthesise and integrate multimodal data from multiple industry sources.

With up to 665 billion parameters, it ensures precise, adaptive outputs that respond dynamically to evolving business environments while incorporating differential privacy algorithms to maintain stringent data integrity and security, supporting critical B2B applications across various sectors.
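QX Lab AI has not published implementation details, but cross-modal attention itself is a standard mechanism. Below is a toy illustration only, in which text tokens attend over image-patch features using the usual scaled dot-product formulation; it is not DISM’s actual code.

```python
# A toy illustration of cross-modal attention: queries come from text,
# keys/values from image features. Not QX Lab AI's implementation.
import numpy as np

def cross_modal_attention(text_feats, image_feats):
    """Text tokens attend over image regions via scaled dot-product."""
    d = text_feats.shape[-1]
    scores = text_feats @ image_feats.T / np.sqrt(d)         # (n_text, n_img)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)           # softmax over regions
    return weights @ image_feats                             # image-informed text

text = np.random.randn(5, 64)    # 5 text tokens, 64-dim features
image = np.random.randn(9, 64)   # 9 image patches, 64-dim features
fused = cross_modal_attention(text, image)  # shape (5, 64)
```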

Ask QX PRO has been designed to support more than 120 languages, including English, Welsh, Scottish Gaelic, Irish, and Cornish, with the intention of democratising access to generative AI by catering to users from every region of this vast market.

The platform’s inferencing ensures that outputs are relevant and resonant for all users. Access will be free, and the company will introduce a premium tier in mid-June this year.

Keeping data privacy and security as its utmost priority, QX Lab AI will ensure user data is stored locally in compliance with relevant local laws. The company will work with its infrastructure partners to safeguard user data privacy and security, adhering to the highest standards of data security applicable in the UK and the EU. For now, the Ask QX PRO application is accessible to Android users, with an iOS version expected to launch soon.

“The launch of our multimodal platform is a testament to the tremendous response we have received for Ask QX. We believe that Europe is a crucial market for any technology company like ours, especially given the large number of tech-savvy populations ready to embrace Generative AI for personal and professional use. We recognise the importance of strategic partnerships to grow our platform and reach users across every corner of the continent,” Mr Arjun Prasad, co-founder & chief strategy officer (CSO) of QX Lab AI, said.

Data management demand in digital twin-oriented intensive care units

One of the more telling exhibits at April’s NTT Upgrade 2024 event in San Francisco was the Autonomous Closed-loop Intervention System (ACIS) in development at NTT Research’s Medical & Health Informatics Lab (MEI). The ACIS concept underscores how hospitals can embrace a systems-level automation initiative starting with the intensive care unit to create a patient-centric, personalized, continually updated data management environment that can radically boost patient outcomes.

The situation in a typical ICU

A major part of an intensive care unit’s work in a hospital involves stabilizing patients in serious or critical condition. Stabilization is the process of ensuring that vital signs such as rate of breathing, blood oxygen levels, pulse, blood pressure and temperature stop trending in the wrong direction and instead return to normal levels as soon as possible.

ICUs can struggle or fail to stabilize a patient for a wide range of reasons. From a data perspective, doctors, nurses and the support systems they depend on need the right information at the right time and place to optimize the level of care.

As you might imagine, ICUs tend to lack quite a bit of information about newly arriving patients. Determining the best treatments requires interoperability at the data layer across sources. Some information is patient-specific, while other information is cohort-specific or general to a given condition the patient suffers from.

Detailed information about treatments is equally important in this scenario. The MEI Lab at NTT Research uses a digital twins metaphor to describe how feedback loops and interoperable data models can continually improve patient, condition and treatment data.

The more hospital staff and their systems lack insights about a patient and how that patient responds to specific treatments, the more they are forced to guess when making treatment decisions. And the more guesswork and trial and error involved, the longer it takes to stabilize a patient.

Automated, personalized medicine in the era of digital twins

“If we know enough about a patient, if we have a digital representation of that patient, then we can predict how individual drugs will affect that patient,” says Jon Peterson, distinguished research scientist at the MEI Lab. “Knowing those predictions, we can put together a system that will allow us to very rapidly deliver the appropriate drugs to a patient.”

The digital twin approach reflects an understanding of the challenges ICU staff face when it comes to responding quickly and appropriately with the right amounts of the right drugs, given circumstances that change quickly and patients who have individual needs. ACIS is designed to harness three kinds of evolving, digital twin-oriented data sources:

  • Cardiovascular information from the patient
  • Cardiovascular information from the patient population at large
  • A drug library

The system then monitors the patient’s response to the treatment and adjusts dosages accordingly. This feedback loop updates the patient’s digital twin. One could imagine hospitals also contributing anonymized information back to drug providers to keep the drug library current.
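As a toy sketch of that loop (illustrative only, not NTT Research’s algorithms), the snippet below seeds a simple patient twin from population data, recommends a dose, and refines the twin from the observed response.

```python
# A toy closed-loop sketch: predict a response from a simple patient "twin",
# deliver a dose, observe the result, and update the twin. All values and
# the model itself are illustrative assumptions.

TARGET_MAP = 75.0  # target mean arterial pressure, mmHg

class PatientTwin:
    def __init__(self, sensitivity: float):
        self.sensitivity = sensitivity  # predicted mmHg rise per unit dose

    def predict(self, dose: float) -> float:
        return self.sensitivity * dose

    def update(self, dose: float, observed_rise: float, lr: float = 0.3) -> None:
        # Nudge the twin toward the patient's observed drug response.
        if dose > 0:
            self.sensitivity += lr * (observed_rise / dose - self.sensitivity)

def dosing_step(twin: PatientTwin, current_map: float) -> float:
    error = TARGET_MAP - current_map
    return max(0.0, error / twin.sensitivity)  # dose predicted to close the gap

twin = PatientTwin(sensitivity=2.0)           # prior from population data
dose = dosing_step(twin, current_map=60.0)    # patient-specific recommendation
twin.update(dose, observed_rise=9.0)          # monitored response refines the twin
```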

What ACIS implies in terms of organization-wide data requirements

I’ve written before in these pages about the best practice of knowledge graph-based data management when creating and evolving digital twins. (See the IOTICS Portsmouth Ports use case at https://www.datasciencecentral.com/preconditions-for-decoupled-and-decentralized-data-centric-systems/, for example.) Portsmouth Ports now monitors and shares timely emissions metrics with a broad range of industry users. Armed with this information, shipping companies can determine which vessels, when and where, are out of compliance with emissions control requirements. This level of temporal + spatial detail is essential to bring the various transportation types, and the ports themselves, into compliance with UK regulations.

Also relevant is the Montefiore Health Patient-Centered Analytics Learning Machine (PALM) in the Bronx. (See https://www.datasciencecentral.com/beyond-data-science-a-knowledge-foundation-for-the-ai-ready-enterprise/ for more information.) PALM brings together many critical but disparate external and internal sources so that machine learning and advanced analytics can run against the entire connected, continually updated whole. As a result, the knowledge graph-based PALM can predict and prevent specific occurrences of sepsis and respiratory failure, for instance.

Knowledge graphs, digital twins and other top trending AI technologies in 2024

In closing, here are three facts worth noting when it comes to trending technologies that need to be harnessed together in next-generation systems like ACIS:

  • Gartner in April 2024 released its Impact Radar. In the bullseye of its Impact Radar target are two technologies: knowledge graphs and generative AI. These two technologies can and should be designed to be symbiotic.
  • Without knowledge graphs, GAI will continue to hallucinate, an issue semantic graph database provider Stardog explores at length in its May 2024 FAQ “19 Questions about LLM Hallucinations.” In its discussion of question 18, Stardog points out: “…databases matter because structured data and data records matter. In fact, one under-appreciated fact of retrieval augmented generation’s (RAG’s) inadequacy is that it doesn’t work very well with structured data….” (A minimal sketch of graph-grounded retrieval follows this list.)
  • Gartner in another April 2024 article, “When Not to Use Generative AI,” emphasizes the need for a system-level perspective of the kind that ACIS encourages for organizations to use AI appropriately. “Generative AI is only one piece of the much broader AI landscape, and most business problems require a combination of different AI techniques. Ignore this fact, and you risk overestimating the impacts of GenAI and implementing the technology for use cases where it will not deliver the intended results.”
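To make the graph-grounding point concrete, here is a minimal sketch in which facts are first retrieved from a knowledge graph with SPARQL and only then handed to the model as verified context. The tiny graph and the ask_llm helper are hypothetical.

```python
# A minimal sketch of grounding an LLM answer in structured data: query a
# knowledge graph first, then pass the verified facts as context.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:patient42 ex:condition ex:sepsis ;
             ex:allergy   ex:penicillin .
""", format="turtle")

rows = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?p ?o WHERE { ex:patient42 ?p ?o }
""")
facts = "\n".join(f"{p.n3(g.namespace_manager)} {o.n3(g.namespace_manager)}"
                  for p, o in rows)

prompt = f"Answer using only these verified facts:\n{facts}\n\nQuestion: ..."
# ask_llm(prompt)  # hypothetical call to whichever LLM API you use
```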

I tested this $700 AI device that can translate 40 languages in real time

ZDNET's key takeaways

  • The Timekettle X1 Interpreter Hub is a translation device available for $700.
  • The X1 Interpreter Hub has a screen and earbuds that charge when stored inside the device. Thanks to AI, it's very effective at translating and has different modes for one or two wearers per device.
  • Though generally effective, the Timekettle X1 Interpreter Hub requires users to speak clearly near the device, and isn't very accurate when people speak too fast. We also can't look past the steep $700 price tag.

As a fan of artificial intelligence (AI) tools, I jump at the chance to test new, innovative applications of the technology.

That's why I decided to try out the Timekettle X1 Interpreter Hub — especially as a bilingual person.

The Timekettle X1 Interpreter Hub looks sleek and feels futuristic. It's packaged beautifully: The box contains a Timekettle (which does the translating for you), two earbuds that are stored and charged inside the Timekettle, ear hooks and tips for the earbuds, a USB-C charging cable, and instructions.

After a good charge, I turned on the Timekettle for initial testing. This is a standalone device, meaning you can translate sounds around you, like another person talking or a movie on the TV, provided it's loud enough. However, it can also handle two-way translation when each person wears an earbud. This lets you speak to a person in one language and have them hear the translation in their preferred language in their earbud, and vice versa.

Furthermore, several Timekettle users can hold multilingual meetings and have up to 20 people speaking up to five languages in one place, provided each person has their own device.

The Timekettle also allows remote voice calls between two devices, as long as each is connected to Wi-Fi at the time. During these calls, each user can speak their own language and have the devices translate for the listener.

Also: My favorite XR glasses for productivity and traveling just got 3 major upgrades

I tested these functionalities and found that the Timekettle was equally effective in each instance. That is to say, it was mostly accurate, but still made mistakes, regardless of the conversation method.

As a member of a bilingual family, I tested the Timekettle X1 Interpreter Hub with my husband. We used English and Spanish one-on-one, with each of us wearing an earbud. I also tested listen-and-play mode in different languages, where one user wears both earbuds, and the device listens. Finally, I tested ask-and-go mode, which lets you speak into the Timekettle, and it displays the translation.

My intermediate proficiency in French helped me test the Timekettle in that language. I also used the listen-and-play mode with Korean, German, French, Spanish, and Russian with Netflix content, using subtitles to confirm the accuracy of the general gist.

The Timekettle X1 was accurate when I used deliberately clear speech, but accuracy diminished when people spoke too fast or used regional vernacular. When online, the device can understand 93 accents across the 40 languages in its repertoire; offline, the X1 offers 13 language pairs. Even inaccurate translations were usually understandable, though not always.

Also: Generative AI may be creating more work than it saves

I liked that the Timekettle has a clear LCD screen that displays translated text for visual confirmation, which is available in different modes. The display makes navigating and choosing the preferred translation mode easy and lets you keep track of the conversation. The visual clarity also helps with language practice, which brings me to my next point.

Aside from being a great tool for conference rooms, business conversations, international travel, and remote calls, the Timekettle X1 Interpreter Hub is also useful for learning pronunciation across languages. If you're interested in learning a new language, a device like this can greatly aid in learning how to pronounce or word a phrase correctly.

ZDNET's buying advice

Is the Timekettle X1 Interpreter Hub worth its $700 price tag? Although it's a standalone device that can be a great tool for translation, the X1, in my opinion, is priced too high for the functionality it offers. I find it makes mistakes too often to justify such a steep price. It does include earbuds and packs high-end technology that is powered by AI, so it's a definite step up from other options priced between $100 and $150.

Although the earbuds are included, the device is incompatible with other earbuds and headphones. You can't use the Timekettle with your AirPods or over-the-ear headphones, so it's a good thing that the included earbuds are comfortable. This also means, however, that you're out of luck if you lose the Timekettle earbuds.

Also: When's the right time to invest in AI? 4 ways to help you decide

The Timekettle X1 Interpreter Hub works well and is useful for translating in business and personal settings. It's simply priced too high for my comfort, especially when other options, like Google Translate, are free. I could see a professional interpreter appreciating this tool in their arsenal — or a well-heeled global traveler seeking a portable but reliable translation solution.

Cognitive robotics – Part one

In this three-part series, we will explore cognitive robotics – a fascinating subject that promises to play a major role in the evolution of AI.

Cognitive robotics lies at the intersection of robotics, artificial intelligence (AI), and cognitive science, aiming to create intelligent systems that mimic human cognitive processes. This field is distinct yet closely related to intelligent robotics, emphasizing an interdisciplinary approach and bio-inspired methods.

What is Cognitive Robotics?

Cognitive robotics (CR) refers to designing robots with human-like intelligence, encompassing perception, motor control, and high-level cognition. The goal is to build physically embodied intelligent systems inspired by cognitive sciences and natural intelligent systems. This interdisciplinary effort draws from AI, cognitive science, neuroscience, biology, philosophy, psychology, and cybernetics.

Different scholars have emphasized different aspects of CR over the years:

  1. The effort to build physically embodied intelligent systems based on cognitive sciences and natural intelligent systems.
  2. Designing robots with human-like intelligence in perception, motor control, and cognition, emphasizing the need for interdisciplinary contributions.
  3. Using bio-inspired methods for designing sensorimotor, cognitive, and social capabilities in autonomous robots.

These themes highlight CR’s focus on interdisciplinary approaches and human-like, bio-inspired functions, ranging from sensorimotor skills to higher-order cognitive functions and social skills.

Approaches to Cognitive Robotics

Several subfields within cognitive robotics contribute to its diverse nature:

Neurorobotics

Neurorobotics explores the interaction between neural systems and robotic embodiments, using robots to study brain-body-environment interactions. This field aims to create autonomous systems that leverage biological intelligence, providing insights into neural circuitry and cognitive processes. For example, neurorobots can model neural networks and study how physical and sensory feedback loops affect behavior and learning. By mimicking the human nervous system, neurorobots help understand complex brain functions and contribute to developing advanced AI systems.

Developmental Robotics

Developmental robotics draws inspiration from child development, aiming to create robots that acquire skills through interactions with their environment, much like human infants. This approach integrates developmental psychology, neuroscience, and robotics, focusing on the autonomous acquisition of complex sensorimotor and cognitive abilities. Developmental robots learn and adapt in real-time, developing new skills and knowledge through exploration and social interaction. This methodology not only advances robot autonomy but also offers insights into human cognitive development.

Evolutionary Robotics

Evolutionary robotics uses principles of natural selection and genetic algorithms to evolve robot behaviors and physical forms. This approach aims to create adaptive and robust robots capable of evolving to meet new challenges and environments. Robots in this field are treated as autonomous organisms that undergo simulated evolution, developing new capabilities over generations. By evolving neural network controllers and physical structures, evolutionary robotics creates innovative and efficient solutions to complex tasks, mirroring nature’s adaptive processes.
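The core loop is easy to sketch. Below is a toy genetic algorithm over controller parameters, with selection, crossover, and mutation; the fitness function is a stand-in for evaluating a real or simulated robot on its task.

```python
# A toy evolutionary-robotics loop: evolve controller parameters through
# selection, crossover, and mutation. The fitness function is a placeholder
# for a real robot-task evaluation (e.g., distance travelled in simulation).
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 8, 20, 50

def fitness(genome):
    # Placeholder: reward genomes close to an arbitrary target behaviour.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

pop = [[random.random() for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP_SIZE // 2]                      # selection
    pop = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]

best = max(pop, key=fitness)  # the evolved controller parameters
```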

Swarm Robotics

Swarm robotics takes inspiration from social insects, designing systems of simple robots that can cooperate to perform complex tasks. This approach emphasizes decentralized control, local interactions, and collective behaviors, leading to robust and scalable robotic systems. Swarm robots communicate through simple signals, coordinating actions to achieve collective goals without central control. Applications include search and rescue missions, environmental monitoring, and agricultural automation, where the swarm approach’s flexibility and resilience provide significant advantages.

Soft Robotics

Soft robotics focuses on creating robots with flexible, compliant bodies that can safely interact with humans and adapt to their environment. This approach often overlaps with developmental robotics, as it aims to design robots that learn and develop over time, using principles of biological growth and adaptation. Soft robots use materials like silicone and other elastomers to achieve movements and interactions that are more natural and safe for human contact. They are particularly useful in delicate tasks such as medical surgery, rehabilitation, and personal assistance, where their gentle touch and adaptability reduce the risk of injury and increase effectiveness.

Conclusion

Cognitive robotics represents a vibrant, interdisciplinary field that seeks to bridge the gap between human cognition and robotic intelligence. By drawing on diverse disciplines and emphasizing bio-inspired designs, CR aims to create robots that can perceive, reason, and act in ways that closely mimic human and animal intelligence. As the field continues to evolve, it promises to unlock new possibilities for autonomous systems and their applications in various domains, from healthcare to industrial automation.

Understanding cognitive robotics highlights its potential to revolutionize how we design and interact with intelligent systems, ultimately leading to more adaptable, capable, and human-like robots. The diverse approaches within cognitive robotics, including neurorobotics, developmental robotics, evolutionary robotics, swarm robotics, and soft robotics, each contribute unique insights and capabilities, driving the field forward and opening new frontiers in both research and practical applications.

References:

Cangelosi, A. and Asada, M. (eds.), Cognitive Robotics, MIT Press, 2022.

Save $500 on this robot vacuum and mop to keep your floors sparkling

What's the deal?

This is a great time to get spring cleaning supplies, and the Ecovacs Deebot X2 Omni is one of them, available now for $999 (save $500).

Why this deal is ZDNET-recommended

You know that gratifying feeling of coming home to a clean house? With a family of five, that's not a feeling I often get, if at all. Enter the Ecovacs Deebot X2 Omni.

Also: Ecovacs announced a new robot vacuum that squares up to the competition

I've tested a fair share of robot vacuum and mop combinations, so I quite appreciate the experience of having a robot roaming around my home that picks up crumbs, dust, and everything in between. But the Deebot X2 Omni is the best robot vacuum and mop I've tried.

Ecovacs launched the Deebot X2 Omni today, a new flagship robot vacuum and mop combo with a clear edge. After testing it out for a couple of weeks, I've found room for improvement in some tasks — largely outweighed by its long list of strengths.

The X2 Omni checks all the specs boxes for a high-end robot vacuum and mop. It has 8,000Pa of suction power, higher than the 6,000Pa of the current market leader, the Roborock S8 Pro Ultra. Using artificial intelligence (AI), the robot can detect and avoid objects strewn about the floor, such as socks and charging cables, and has a mopping pad that automatically lifts 15mm when carpets or rugs are detected.

Also: The best robot mops you can buy

The Omni station charges the robot vacuum and mop and works as a base where it empties its dustbin and self-washes and dries its mop pads. This feature means you only have to worry about keeping the base station's clean water tank filled and its dirty water tank empty, tasks you must complete every few cleaning cycles.

Designed to be a hands-free experience, the base station is also self-cleaning. Running the self-cleaning option in the Ecovacs app will clean the base plate in the station — the spot where your mops are cleaned that typically sees water and dirt accumulation. This feature is a level above competitors like Yeedi, which requires users to periodically clean dirty water at the bottom of the docking station.

The dust bag holds everything the Deebot X2 sweeps from your floors and only needs emptying about once a month, although your mileage may vary.

One of my only gripes is that the clean water tank feels awkward to hold when filled — it almost feels like it's not built to last, although I won't know for certain until I've used it for several months. It's a four-liter water tank with a handle to carry it on the lid, held shut by a plastic clip. I hold the tank from the bottom because I feel like using the handle to carry the full tank around will result in the closure failing and four liters of water going everywhere.

About the square shape

The Deebot X2 Omni has several superpowers, starting with its compact package. The squared edges stood out to me as a feature when I unpacked the device, along with how narrow and short it was. At only 12.6 inches wide, it's about 0.3 inches narrower than the Eufy X9 Pro robot vacuum mop, which had been my super mop until the X2 Omni arrived.

Although 0.3 inches sounds like a small difference, it's proven considerable when a robot has to navigate through furniture legs. Case in point: the Eufy X9 Pro uses AI to avoid objects, but whenever I sent it to clean the first floor, it would get stuck between the kitchen barstools' legs. The stools are fairly lightweight, so the robot would drag them around instead of signaling it was stuck. I'd see my kitchen barstools gliding around my floor or randomly find one hanging out by the shoe bench.

Also: The best iRobot vacuums

This isn't a big deal and is highly subjective, so it's not something I included in my Eufy review; it's not the robot's fault that it's the exact size as the width of the distance between my barstool's legs. But the narrower Deebot X2 Omni can clean under the barstools and figure its way back out, which means no more 'guess where the barstools are today' games.

The Deebot X2 is also almost an inch shorter than my Eufy robot vacuum, at 3.7 inches in height. The lower dimensions and narrow build allow the Deebot X2 to clean in places other robots typically can't reach or navigate under.

Some AI-powered features

The Deebot X2 leverages Ecovacs' AIVI 3D 2.0, combining an AI processor with 3D structured-light sensors and dual-laser LiDAR technology. The result is efficient maps that let the robot detect objects during navigation and clean around them intelligently. This means you won't have to ensure your floors are free of charging cables, toys, or shoes before sending out the X2.

The AI-powered navigation and obstacle avoidance, backed by Ecovacs' proprietary AINA Model, uses visual recognition and reinforcement learning based on sensor information.

Also: 6 things to know about robot vacuums before you buy one

The Deebot X2's clever technology also makes for a customized cleaning process if that's your thing. The device's AI-powered visual recognition, ability to detect floor type, and historical cleaning logs let the robot infer which room it's cleaning, such as the kitchen, living room, or bedroom, and adjust its suction power and mopping mode.

A new level of voice control

Voice control makes everything in my home easier. Countless robot vacuums let you use a third-party virtual assistant for voice control, such as Amazon Alexa, Google Assistant, or Siri. Saying, "Alexa, clean the floors" in my house dispatches the Eufy X9 Pro to clean my bedroom and hallway. However, these assistants are limited in the functions they can make the robot perform.

Sure, you can dispatch your robot with Alexa or Google, but have you ever been able to tell it to "turn right, move three meters forward, turn left, and clean there"?

Also: This robot vacuum connects to your home's water supply for full automation

Ecovacs robot vacuums have a built-in voice assistant named YIKO that users can talk with to control the robot directly — and it works swimmingly. Saying "OK, YIKO" wakes up the voice assistant. If your robot is out cleaning, you can ask it to return and clean the dining room again or give it multiple commands in one sentence without pulling up the app.

ZDNET's buying advice

The Ecovacs Deebot X2 Omni is the company's new flagship robot with all the smart features and a price to match, at $1,500, though $500 off right now after Cyber Monday. Over the past few weeks, it's gained a top-dog position in our home, becoming the main robot to clean the downstairs floor — and that's saying a lot.

The great thing about an all-in-one, self-emptying, and self-cleaning robot vacuum and mop is that it's not best suited for some circumstances — it's suited for all. Some mid-range models might be great at mopping but suffer from not having strong or effective suction, making them best suited for homes with hard floors. Others might boast great suction power, okay mopping, and short battery life, making them best for mostly carpeted apartments or small homes.

The Deebot X2 Omni is great at all of these things. The biggest challenge in our home is downstairs because it's mostly hardwood and tile with some area rugs — it's where the dog comes in and out from the yard, where we cook, and where the toddler drops most of the crumbs.

Also: Skip the Dyson: This $150 stick vacuum is just as powerful (and can mop, too)

The X2 Omni's MSRP of $1,500 compares to $1,600 for the Roborock S8 Pro Ultra (also discounted right now at $600 off). Suppose I were looking for a hands-free robot vacuum and mop suitable for my home's complex needs. In that case, I'd have to choose the Deebot X2 Omni over the Roborock's flagship because the extra features, like the self-cleaning station and stronger suction, set it apart.

Contextualize your business data with content orchestration techniques

One of the major issues enterprises have is tapping into business information that’s trapped in many different siloed applications. Customer data platforms (CDPs) are supposed to unify structured data about customers from a number of these siloed applications and make that information accessible to a broad range of users.

But what about content? Textual content, because it behaves differently in digital form, is disconnected from images and video, which are in turn disconnected from customer transactional data.

According to Michael Andrews, an independent consultant and former content strategist at Kontent.ai, more advanced, API-centric enterprises could work across all of these silos, orchestrating both content and data resources so they could be custom assembled and delivered to a range of different external and internal consumers.

Such a metasystem or “system of systems” can harness content and data teams together to solve internal business issues as well as improve customer outreach.

Figure: Michael Andrews’ composable metasystem, 2024. Used with permission.

Devil in the orchestration details

Andrews gave a talk entitled “Understanding The Need For Content Orchestration” in May 2024, at a BrightTALK event hosted by Content Wrangler Scott Abel. Andrews underscored that content orchestration is not easy, and that a product taxonomy, for example, needs to be designed to work across multiple systems.

That’s an understatement. Really, what’s needed for cross-enterprise content + structured data orchestration at scale is a tiered grouping of ontologies and taxonomies that disambiguates and connects the different business contexts in a logically consistent way. You need these levels of abstraction to work across the described contexts. If you want visibility across the supply chain, it’s even more important to design your knowledge graph to be logically consistent, subgraph by subgraph.
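As a minimal sketch of that tiering (with made-up URIs; IndustryKG’s actual model is far richer), a department’s local term is mapped onto a shared upper-tier concept with SKOS rather than redefined, keeping the two contexts logically connected.

```python
# A minimal sketch of tiered taxonomy design with SKOS: a shared upper-tier
# concept, and a department-level term mapped onto it rather than redefined.
# All URIs are made up for illustration.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)
g.bind("skos", SKOS)

# Tier 1: the enterprise-wide concept everyone shares
g.add((EX.ElectricVehicle, RDF.type, SKOS.Concept))
g.add((EX.ElectricVehicle, SKOS.prefLabel, Literal("Electric vehicle", lang="en")))

# Tier 2: marketing's local term, connected to the shared tier, not duplicated
g.add((EX.mkt_EV, RDF.type, SKOS.Concept))
g.add((EX.mkt_EV, SKOS.prefLabel, Literal("EVs", lang="en")))
g.add((EX.mkt_EV, SKOS.exactMatch, EX.ElectricVehicle))

print(g.serialize(format="turtle"))
```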

Take the challenge of industry and product segmentation, for example. It’s easy to get off on the wrong track when working with established taxonomies that have been in use for years. Ex-corporate planner Alan Michaels of Industry Building Blocks LLC (IBB) has worked for years on this problem. Michaels developed an industry + product taxonomy, based on Michael Porter’s Five Forces classification approach, that’s designed to fix the problems with the North American Industry Classification System (NAICS).

If you stay with NAICS, it’s not possible to get a clear, consistent, up-to-date picture of what products and services companies are selling, because the classification scheme is misguided and thoroughly inconsistent. If you’re trying to forecast demand for your products and assess the competitive marketplace for them, you’ll encounter frequent misalignment. You’ll be comparing apples to oranges.

Whoever tries to use NAICS either decides not to use it, or uses it with all its flaws and suffers the consequences. What’s worse is that NAICS, through the people who do end up using it, misleads the business public at large every day, at scale.

Michaels became interested in ontologies and web semantics as a means of logically connecting and scaling data contexts across industries and supply networks. I introduced him to Dave McComb at Semantic Arts, whose team developed and built IndustryKG, an RDF ontologized evolution of IBB’s taxonomy. IndustryKG is now commercially available. (Full disclosure: I have a working relationship with Semantic Arts.)

It’s my firm belief that the only way to effectively orchestrate the content and structured data resources of organizations is to take the same approach that IndustryKG embodies, with all data and content you intend to share, period. If your data, structured or unstructured, isn’t logically and consistently connected, it can’t be readily useful as a shared resource.

How organizations can undermine their own orchestration efforts

The existing software in a typical organization’s portfolio presents a serious issue: it tends to reinforce organizational silos with data silos and code fragmentation that themselves echo and perpetuate those silos. In the case of Andrews’ composable metasystem illustration, customer-facing systems and business operations systems are managed by different departments.

Large enterprises manage content, knowledge management, and data management separately. Software that manages information resources is most often designed to handle either content or structured data, not both. Software is most often designed to divide and conquer rather than unify.

Teams responsible for orchestration have to work across these departments. They have to deal with the data and content cartels of the kind that prosper under passive leadership. Orchestration initiatives will often suffer delays resulting from attempts to gain access permission to content and data repositories.

In an era when the demand to mine useful data and content is rising sharply, most leadership is still investing in software as a service that perpetuates old ways of working by increasing data siloing and logic fragmentation. A well-designed application programming interface (API) layer can be helpful, but leadership is working at cross purposes if they blindly incur a higher integration tax year after year by adding more application suites. (See “Ten reasons organizations pay more in data integration tax every year at https://www.datasciencecentral.com/ten-reasons-organizations-pay-more-in-data-integration-tax-every-year/ for more information.)

In that sense, the need for orchestration becomes yet another argument for fundamental data layer transformation. Fundamental transformation tackles the organizational issues, the architectural shortfalls and the data and content problems holistically.