The Rise of Mixture of Experts LLMs

In the past week, several ‘Mixture of Experts’ models arrived, including Databricks’ DBRX, AI21 Labs’ Jamba, xAI’s Grok-1, and Alibaba’s Qwen1.5-MoE, joining Mixtral 8x7B and pushing MoE into the mainstream.

Welcome to MOE's, which model would you like to use today from our new and updated menu?
XXL @DbrxMosaicAI
XL @MistralAI
Medium @AI21Labs
Small @Alibaba_Qwen
cc @code_star @JustinLin610 @tombengal_

— Alex Volkov (Thursd/AI) (@altryne) March 28, 2024

Decoding Mixture of Experts

A Mixture of Experts (MoE) model is a type of neural network architecture that combines the strengths of multiple smaller models, known as ‘experts’, to make predictions or generate outputs. An MoE model is like a team of hospital specialists. Each specialist is an expert in a specific medical field, such as cardiology, neurology, or orthopaedics.

With respect to Transformer models, MoE has two key elements – Sparse MoE Layers and a Gate Network.

Sparse MoE layers represent different ‘experts’ within the model, each capable of handling specific tasks. The gate network functions like a manager, determining which words or tokens are assigned to each expert.

MoEs replace the feed-forward layers with Sparse MoE layers. These layers contain a certain number of experts (e.g. 8), each being a neural network (usually an FFN).
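The replacement described above can be sketched as a toy layer: a gate network scores every expert for each token, the top-k experts run their FFNs, and the outputs are combined using the renormalised gate weights. This is a minimal NumPy illustration of the idea, not any particular model's implementation; all names and sizes are made up.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class SparseMoELayer:
    """Toy sparse MoE layer: a router picks the top-k expert FFNs per token."""
    def __init__(self, d_model, d_hidden, n_experts=8, top_k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.top_k = top_k
        # Each expert is a small two-layer FFN: d_model -> d_hidden -> d_model.
        self.w1 = rng.standard_normal((n_experts, d_model, d_hidden)) * 0.02
        self.w2 = rng.standard_normal((n_experts, d_hidden, d_model)) * 0.02
        # Gate network: one linear layer producing a score per expert.
        self.wg = rng.standard_normal((d_model, n_experts)) * 0.02

    def __call__(self, x):  # x: (n_tokens, d_model)
        scores = x @ self.wg                                  # (n_tokens, n_experts)
        top = np.argsort(scores, axis=-1)[:, -self.top_k:]    # chosen experts per token
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            # Renormalise gate weights over the selected experts only.
            gate = softmax(scores[t, top[t]])
            for g, e in zip(gate, top[t]):
                h = np.maximum(x[t] @ self.w1[e], 0.0)        # ReLU FFN
                out[t] += g * (h @ self.w2[e])
        return out

layer = SparseMoELayer(d_model=16, d_hidden=32)
y = layer(np.random.default_rng(1).standard_normal((4, 16)))
print(y.shape)  # (4, 16)
```

Only two of the eight expert FFNs are evaluated for each token, which is where the compute savings come from.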

Breaking Down Popular MoEs

Databricks DBRX uses a fine-grained mixture-of-experts (MoE) architecture with 132B total parameters, of which 36B are active on any input. It stands out among other open MoE models, such as Mixtral and Grok-1, because it employs a fine-grained approach.

A fine-grained mixture-of-experts model breaks the ‘experts’ down further to perform extremely specific subtasks, splitting the FFNs into smaller components. This can result in many small experts (even hundreds), and you can then control how many of them are activated.

The idea of fine-grained experts was introduced by DeepSeek-MoE.

Specifically, DBRX has 16 experts and selects four of them, whereas Mixtral and Grok-1 each have eight experts and choose two. According to Databricks, this provides 65x more possible combinations of experts.
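The 65x figure follows directly from counting expert subsets: a router choosing 4 of 16 experts has C(16,4) = 1,820 possible combinations per token, versus C(8,2) = 28 for an 8-choose-2 design.

```python
from math import comb

# Number of distinct expert subsets a router can choose for one token.
dbrx = comb(16, 4)      # DBRX: 16 experts, 4 active
mixtral = comb(8, 2)    # Mixtral / Grok-1: 8 experts, 2 active
print(dbrx, mixtral, dbrx // mixtral)  # 1820 28 65
```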

xAI recently open-sourced Grok-1, a 314B-parameter Mixture-of-Experts model with 25% of the weights active on a given token, meaning roughly 78 billion parameters are used at a time.

AI21 Labs’ Jamba, by contrast, is a hybrid decoder architecture that combines Transformer layers with Mamba layers, a recent state-space model (SSM), along with a mixture-of-experts (MoE) module. The company refers to this combination of three elements as a Jamba block.

Jamba applies MoE at every other layer, with 16 experts and uses the top-2 experts at each token. “The more the MoE layers, and the more the experts in each MoE layer, the larger the total number of model parameters,” wrote AI21 Labs in Jamba’s research paper.

Jamba uses MoE layers to only use 12 billion out of its total 52 billion parameters during inference, making it more efficient than a Transformer-only model of the same size.

“Jamba looks very impressive! It’s technically smaller than Mixtral yet shows similar performance on benchmarks and has a 256k context window,” shared a user on X.

Alibaba recently released Qwen1.5-MoE, a 14B MoE model with only 2.7 billion activated parameters. It comes with a total of 64 experts, an eightfold increase over the conventional MoE setup of eight experts.

Similar to DBRX, it also employs a fine-grained MoE architecture where Alibaba has partitioned a single FFN into several segments, each serving as an individual expert.

“DBRX is good for enterprise applications, but the Qwen MoE is a cool and great toy to play with,” wrote a user on X.

Mixtral 8x7B is a sparse mixture-of-experts network. It is a decoder-only model in which the feedforward block selects from eight distinct parameter groups. At each layer, for every token, a router network chooses two of these groups (the ‘experts’) to process the token. It has 47B parameters but uses only 13B active parameters during inference.
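Putting the figures above side by side, each of these models activates only a fraction of its total parameters per token (numbers as reported in this article; Grok-1's active count is taken as 25% of 314B):

```python
# Active-parameter fraction per token for the MoE models discussed above.
models = {
    "DBRX":         (36e9, 132e9),
    "Grok-1":       (78.5e9, 314e9),  # "25% of weights active"
    "Jamba":        (12e9, 52e9),
    "Qwen1.5-MoE":  (2.7e9, 14e9),
    "Mixtral 8x7B": (13e9, 47e9),
}
for name, (active, total) in models.items():
    print(f"{name}: {active / total:.0%} of parameters active per token")
```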

Why Choose MoE?

“Mixture-of-Experts will be Oxford’s 2024 Word of the Year,” quipped a user on X. Jokes aside, the reason MoE models are gaining popularity today is that they enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size on the same compute budget as a dense model.

In an MoE model, not all parameters are active or used during inference, even though the model might have many parameters. This selective activation makes inference much faster compared to a dense model that uses all parameters for every computation. However, there’s a trade-off in terms of memory requirements because all parameters must be loaded into RAM, which can be high.
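A quick back-of-the-envelope calculation for Mixtral 8x7B illustrates this trade-off: all 47B weights must sit in memory even though only about 13B participate in any single token's forward pass (fp16 is assumed here purely for illustration).

```python
# Rough memory-vs-compute estimate for Mixtral 8x7B in fp16 (2 bytes/param).
total_params, active_params = 47e9, 13e9
bytes_per_param = 2  # fp16

ram_gb = total_params * bytes_per_param / 1e9
print(f"RAM to hold all weights: {ram_gb:.0f} GB")              # 94 GB
print(f"Parameters touched per token: {active_params / total_params:.0%}")
```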

As the need for larger and more capable language models increases, the adoption of MoE techniques is expected to gain momentum in the future.

The post The Rise of Mixture of Experts LLMs appeared first on Analytics India Magazine.

Genpact Appoints Piyush Mehta to Lead AI-First Strategy in India

Genpact has announced a significant expansion in the role of its Chief Human Resources Officer (CHRO), Piyush Mehta. Effective immediately, Mehta will also assume the responsibilities of Country Manager for India, signalling the company’s deepening commitment to innovation, growth, and talent development in the region.

The decision reflects Genpact’s steadfast dedication to India and its strategic shift towards becoming an AI-first organisation. In the previous year, Genpact established and expanded three new operational centres in Tier 3 cities across India, such as Madurai, Jodhpur, and Warangal.

This expansion not only broadened the company’s talent pool and geographic footprint but also contributed significantly to the local economic landscape.

BK Kalra, President and CEO of Genpact, emphasised the importance of India as a critical talent market, stating, “India remains a strategic talent market for Genpact, and we believe people are the greatest assets for us and our clients.” He praised Mehta’s extensive experience in nurturing talent ecosystems over the past 25 years and expressed confidence in his ability to bolster Genpact’s presence in India.

In his former role as CHRO, Mehta played a pivotal role in shaping Genpact’s human resources strategy. Now, with his expanded responsibilities, he aims to leverage his expertise to further drive the company’s AI-first approach in India, delivering value to key stakeholders while continuing to oversee the global HR function.

Expressing enthusiasm for his new role, Mehta affirmed, “India’s talent has always been a prime differentiator for economic growth in the country. I am excited to take on this expanded role as we continue to grow our business, empower India’s talented workforce, and contribute to the country’s economic landscape.”

The post Genpact Appoints Piyush Mehta to Lead AI-First Strategy in India appeared first on Analytics India Magazine.

The Tech Needed to Survive This Decade’s ‘Seismic’ APAC B2B Trends

The business-to-business market will see a number of big changes in the years to 2030, according to a new report from customer experience firm Merkle. APAC regional B2B enterprises will need to consider their levels of investment in a number of technologies and integrating new tools now to prepare for and adapt to the coming changes.

The B2B Futures: The View From 2030 report argues four key “seismic” trends are coming to B2B:

  • A rise in machine-to-machine commerce.
  • Enhanced supply chain traceability.
  • The dominance of B2B digital marketplaces.
  • Radically accelerated speed-to-market.

Jake Hird, vice president of strategy, Merkle B2B – APAC, told TechRepublic B2B enterprises in the region will need to respond with investment in technologies including IoT, AI, data analytics and blockchain to ensure they adapt to these shifts hitting their businesses and markets.

IoT to facilitate a rise in machine-to-machine commerce

Machine-to-machine commerce will rise to account for a third of all B2B business by 2030, Merkle said. In practice, this will see the extension of today’s automated decision-making tools — like replenishment systems for retailers that automate the purchase of new inventory from factories — into more complex but still commodity decisions, supported by AI.

Jake Hird, Vice President of Strategy, Merkle B2B, APAC. Image: Merkle

Hird said this trend would require B2B businesses to increasingly prioritise investments in things like IT infrastructure, AI and machine learning tools, blockchain technology and cyber security.

Internet of Things

The growth of machine-to-machine commerce will be heavily dependent on the embrace and deployment of IoT tools, which will need to be embedded throughout the B2B market. “IoT devices, sensors, and networks will form the backbone of m2m commerce,” Hird said.

While acknowledging unsteady growth to date in the IoT market, Merkle said IoT has matured. Merkle’s report predicted IoT devices would soon be a key source of data for B2B businesses needing to “identify and forecast business needs, ranging from potential out of stocks to the degradation of equipment that may need replacement — and transact accordingly.”

Blockchain and smart contracts

Machines will have the means to transact with other machines using blockchain. “Blockchain technology and smart contracts will ensure secure and transparent transactions, enabling machines to execute agreements without human intervention,” Hird said.


Edge computing infrastructure

B2B enterprises will need to invest in edge computing infrastructure to support more real-time data processing and purchasing transactions across their footprints and supply chains.

Data management and integration platforms

B2B businesses will need to collect, process and analyze more information, making investment in data management important. This will include overcoming integration challenges and leveraging the interoperability of systems to generate the required insights to feed systems.

Cyber security systems

Cyber security solutions will be crucial to safeguarding transactions from unauthorised access, as well as other online threats, according to Merkle. “Businesses will need to invest in measures such as intrusion detection systems and advanced encryption technologies,” Hird said.

Blockchain and distributed ledger tech to deliver supply chain traceability

Supply chain traceability could become a top two purchase driver for B2B by 2030, on the back of consumer and market pressure. This will see blockchain and distributed ledger technology adoption rise as businesses seek to deepen the transparency and trust of their supply chains.

Blockchain and distributed ledger technology

Merkle’s report suggests that blockchains, the most common form of distributed ledger technology, could help “shine light on byzantine global supply chains” by providing access to certification data, sourcing practices and environmental impact — even calculating carbon footprints. These technologies could help businesses enforce sustainability standards.

RFID tags and IoT

The availability, decreasing cost and miniaturisation of RFID tags and IoT sensors will see IoT play a critical role in traceability. This is expected to enable real-time tracking and monitoring of products as they progress through the supply chain, from sourcing right through to sale.

Data analytics and AI tools

B2B players will need data analytics and AI to derive insights from the data generated by supply chain traceability systems. “Through real-time analysis, businesses can optimise inventory management, anticipate demand fluctuations, and mitigate supply chain risks,” Hird said.

Integration readiness to support the rise of B2B digital marketplaces

B2B digital marketplaces are expected to capture 50% of B2B business by 2030, up from 15% in 2024. This shift will drive B2B organisations to focus on implementing e-commerce platforms to develop a presence within growing digital marketplaces or dive in and build their own.

Analytics and personalisation tools

Analytics and personalisation will allow businesses to derive insights into customer behavior and preferences, Hird said. This will help B2B businesses adjust marketing and communications for individual B2B buyers, improving customer experience, engagement and revenue.

Integration and API solutions

Digital marketplaces rely on the integration of systems to facilitate customer transactions and buying experiences. Businesses will need to invest in integration and API solutions to connect internal and external systems and platforms to streamline operations and enhance efficiency.

Supply chain optimisation tech

Digital marketplace models also require B2B companies to meet demands like faster delivery times and efficient order fulfillment from their marketplace presence, Hird said. He argued this will encourage B2B businesses to adopt more supply chain optimisation technologies.

Design and prototyping tools to accelerate B2B speed-to-market

Major changes are expected in the way B2B brands design, test and deliver goods to market. For example, in pharmacology, Merkle said although it can take 10 to 15 years to bring a drug to market, faster drug discovery and clinical trials could shorten this process dramatically.

Generative AI and virtual prototyping

Functional product and prototype design processes can be supercharged with generative AI and virtual prototyping technologies, Hird told TechRepublic. By using simulations and design tools that augment human contributions and traditional methods, businesses will be able to significantly reduce the time and cost associated with physical prototyping and testing.

“This enables faster iteration cycles, accelerating the product development process and enhancing speed to market for new goods and innovations,” Hird said.

Prisma AI Has an ‘Eye on You’ at Adani Airports


Last year, Prisma AI, a global company providing visual AI-based solutions, partnered with Adani airports to set up ‘Desk of Goodness’ where the company looks to provide swift assistance to passengers in need by alerting their support staff through monitoring.

“The concept is to aid passengers. Anybody carrying a baby, or walking with a crutch, or if someone falls, we want to capture these situations and trigger help to our on-the-ground support staff, who would be having a tablet to see the visuals and the location of the passenger,” said Amitabh Chowdhury, executive director and COO, Prisma AI, in an exclusive interaction with AIM.

Not to Worry, Your Data is Safe

The AI for Humanitarian Goodness project at Adani airports deals with vast amounts of private data: the cameras capture information and send it to a server for analysis. Once the server detects something significant, it notifies a ground staff member via a tablet. However, the notifications are not routed through Google’s servers, the channel most mobile notifications rely on.

In other words, Prisma AI has set up an independent notification server at Adani to bypass Google’s control and ensure connectivity.

“90% of the world’s notifications on your phones go through Google servers only. Google has a monopoly on notifications. They [Adani] clearly said that they will not give internet access. So we have our own notification server, which we implemented for them over there, and those notifications are only going in through their local Wi-Fi network.”

The ‘desk of goodness’ service is available across six airports, including Lucknow, Ahmedabad, Mangalore, and Trivandrum.

Apart from airport service, Prisma has primarily serviced security, finance, infrastructure, and road assistance, including name plate recognition. The company has built its proprietary computer vision technology under the name ‘Gryphos.’

Inside Gryphos

Gryphos serves as the company’s core computer vision platform. It uses deep convolutional neural networks to enable comprehensive analysis of videos, images, objects, and faces, supporting analytics and predictive capabilities. Prisma has more than 100 global deployments across five continents, with over 21,000 cameras running Gryphos.

“Gryphos has various core engines, and feature engines, and over the years, we have started building derivative products out of these core engines, something like ‘Veri5’,” said Chowdhury. The product is a face authentication system which is finding use in financial institutions and security solutions.

The accuracy of results via their platforms is set according to the sectors they cater to. “For instance, in a banking transaction, I cannot afford to make a mistake, where face recognition is required to proceed to the next step. So, I will set the cutoff at a higher percentage, say 75%. Whereas, if the police are searching for a lost child, I set it at 30%. It doesn’t matter if they get five or 10 different lost children. I don’t want to miss out on that one truly lost case. So, the use case defines the data parameterisation,” said Chowdhury.
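Chowdhury's point about per-use-case cutoffs can be captured in a trivial thresholding sketch. The names and values below are purely illustrative, not Prisma's actual API: a high-stakes use case demands precision (a strict cutoff), while a search use case favours recall (a lenient one).

```python
# Hypothetical confidence cutoffs per use case (illustrative values only).
THRESHOLDS = {"banking": 0.75, "lost_child_search": 0.30}

def is_match(similarity: float, use_case: str) -> bool:
    """Accept a face match only above the use case's cutoff."""
    return similarity >= THRESHOLDS[use_case]

print(is_match(0.5, "banking"))            # False: high stakes, strict cutoff
print(is_match(0.5, "lost_child_search"))  # True: recall matters more here
```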

Evolution of Prisma AI

Prisma was founded in the late 1990s and originally headquartered in Germany. In 2017, the company was acquired by Prisma India, and the parent company’s headquarters moved to Singapore. However, Prisma Global India has a core development team that works out of Mumbai.

Prisma’s project, assisting Interpol in Germany to track down paintings and artefacts stolen by the Nazis towards the end of World War II, was among the first use cases.

Prisma AI has also worked on a project to piece together East Germany residents’ torn and shredded letters that were kept in 16,000 gunny bags. Only when the Berlin Wall came down did this problem come to light, and the National Archives requested it be put together.

“It was this whole exercise of a jigsaw puzzle. One bag might have had a thousand-odd letters. They kind of set up a conveyor belt system and put up a camera at one end, and each of those pieces was snapped. Our algorithm kicked in to try and match and put it together. This was kind of the genesis of visual AI if I can call it in those days, and since then, we have moved on to many things,” recalled Chowdhury.

Walking the AI Talk

Unlike many other companies in the space, Prisma AI is known for successfully applying AI in real-world scenarios.

“Unfortunately, even till today, 90% of AI projects around the world do not succeed, and, it’s more of an academic interest, to a large extent, for many of those guys. So, the Googles, and the Amazons, may have made it open source, but those codes are not really being applied that much in the real world scenario,” he said, believing that these are instances where Prisma emerges as a differentiator.

Chowdhury also observes a major shift in how businesses approach AI of late. “I don’t have to sell AI, as a concept, as much as I did, say, five to eight years ago. The moment I say the word ‘AI,’ everybody is interested,” he quipped.

In January, Prisma AI announced its partnership with India’s Pro-Kabaddi League team, Jaipur Pink Panthers. The collaboration will improve the in-stadium fan experience and streamline and enhance venue security. The company is also working with a few partners in the US and Mexico to bring a similar experience to stadiums.

The post Prisma AI Has an ‘Eye on You’ at Adani Airports appeared first on Analytics India Magazine.

X makes Grok chatbot available to premium subscribers

By Ivan Mehta

Social network X is rolling out access to xAI’s Grok chatbot to Premium tier subscribers after Elon Musk announced the expansion to more paid users last month. The company said on its support page that only Premium and Premium+ users can interact with the chatbot in select regions.

Last year, after Musk’s xAI announced Grok, it made the chatbot available to Premium+ users — people who are paying $16 per month or a $168 per year subscription fee. With the latest update, users paying $8 per month can access the chatbot.

Users can chat with Grok in a “Regular mode” or a “Fun mode”. Like any other large language model (LLM) product, Grok shows labels indicating that the chatbot may return inaccurate answers.

We have already seen some examples of that. Earlier this week, X rolled out a new explore view inside Grok where the chatbot summarizes trending news stories. Notably, Jeff Bezos and NVIDIA-backed Perplexity AI also summarize news stories.

Grok now summarizes all the trending news and topics.

You can access it from the explore page and Grok's home screen.

— DogeDesigner (@cb_doge) April 5, 2024

However, Grok seems to go one step further than just summarizing stories by writing headlines. As Mashable wrote, the chatbot wrote a fake headline saying “Iran Strikes Tel Aviv with Heavy Missiles.”

Musk likely wants more people to use Grok chatbot to rival other products such as OpenAI’s ChatGPT, Google’s Gemini, or Anthropic’s Claude. Over the last few months, he has been openly critical of OpenAI’s operations. Musk even sued the company in March over the “betrayal” of its non-profit goal. In response, OpenAI filed papers seeking the dismissal of all of Musk’s claims and released email exchanges between the Tesla CEO and the company.

Last month, xAI open-sourced Grok but without any training data details. As my colleague Devin Coldewey argued, there are still questions about whether this is the latest version of the model and if the company will be more transparent about its approach to the development of the model and information about the training data.


Is AI ‘Copilot’ a Generic Term or a Brand Name?

The term “copilot” for AI assistants seems to be everywhere in enterprise software today. Like many things in the generative AI industry, the way the word is used is changing. Sometimes it is capitalized, and sometimes it is not. GitHub’s choice of Copilot as a brand name was the first major use, followed by Microsoft naming its separate flagship AI assistant Copilot. Then, the term copilot rapidly became generic. In common use, an AI copilot is a generative AI assistant, usually a large language model trained for a specific task.

Confusion over a term could lead to some customers not knowing whether what they’re getting is a Microsoft product, for example. But Microsoft doesn’t seem to be seeking ownership over the word copilot, as a lot of other companies use it. The term copilot originated with flight and implies a competent right-hand person for a highly skilled professional.

Here’s what you need to know about some of the many varieties of AI copilot.

What is Microsoft Copilot?

Microsoft Copilot is an umbrella term for a variety of generative AI and chatbot products now available throughout Microsoft productivity software. For business users, we have a guide to differentiating Microsoft Copilot’s various iterations and new Copilot features and integrations.

Microsoft uses two constructions for Copilot product names: “in” or “for”

In TechRepublic’s cheat sheet about Microsoft Copilot, note Copilot for Security and Copilots for Finance, Sales and Service, which are likely to be purchased separately for specific uses or departments. This is an interesting case of Microsoft using its own brand name in two ways at once (even after all the renaming Copilot has gone through): the “Copilot for” products offer very similar, but more industry-specific, capabilities compared to the “Copilot in” integrations. For example, Copilot in Word can help with any writing task, while Copilot for Security integrates with specific security products.

SEE: Copilot in Bing used to be called Bing Chat before Microsoft unified its brand names somewhat. (TechRepublic)

What is GitHub Copilot?

GitHub released its Copilot product in 2021 (GitHub had already been acquired by Microsoft at this time). GitHub Copilot generates code based on a developer’s existing code; it’s intended as an AI version of pair programming. The original GitHub Copilot was built on OpenAI Codex, a variant of the then-current GPT-3. GitHub came full circle on generative AI with the addition of a chatbot to its newest iteration, GitHub Copilot X.

Microsoft Copilot vs GitHub Copilot

Microsoft Copilot and GitHub Copilot have different primary use cases. GitHub Copilot is for coding specifically, while Microsoft Copilot integrates with a lot of different business software. GitHub Copilot reads code, not natural language, and integrates into a code editor; Microsoft Copilot uses natural language and sits alongside a variety of Microsoft products. On the other hand, Microsoft Copilot can be used to write code in some instances, such as on Power Pages when integrated with Visual Studio Code.

Microsoft Copilot for business starts at $30.00 per user per month with a Microsoft 365 Business Standard or Microsoft 365 Business Premium license.

GitHub Copilot starts at $10 per user per month.

What are other Copilot products?

Salesforce is one non-Microsoft proponent of Copilot as a brand name. Einstein Copilot, released in February 2024, works across Salesforce’s data cloud, AI and customer relationship management software-as-a-service offerings.

Business process automation software company Appian calls its generative AI sidekick Copilot. One sales prospecting software company named itself Copilot AI, but it isn’t selling a generative AI bot — instead, it offers predictive responses to LinkedIn conversations and campaigns.

There are many more companies using Copilot to indicate a generative AI boost for their services.

SEE: There are several reasons why businesses or individual users might want to disable the Microsoft Copilot features that come with Windows 11. (TechRepublic)

Can copilot be used as a generic term?

For now, “copilot” is a flexible word for both generic and brand-specific AI chatbot products for specific business uses. For example, Microsoft Copilot is a copilot. What “copilot” refers to or how an AI chatbot is named may be different depending on the organization. The common uses of the term indicate the Wild West period of AI we are in, showing both that professionals are still working on ways to use generative AI for business and that generative AI is settling into an “assistant” role in the form of chatbots tailored to specific products and applications.

You will likely see the word copilot written in lowercase to indicate the generic version of AI assistants. Even the companies building capital-C Copilot products have embraced the generic term: NVIDIA CEO Jensen Huang used copilot as a generic term at NVIDIA GTC, as did many companies on the conference show floor.

Other companies seem to be staying away from the term: IBM calls its watsonx AI sidekick an Assistant, as does Databricks with its Databricks Assistant.

U.K. and U.S. Agree to Collaborate on the Development of Safety Tests for AI Models

The U.K. government has formally agreed to work with the U.S. in developing tests for advanced artificial intelligence models. A Memorandum of Understanding, which is a non-legally binding agreement, was signed on April 1, 2024 by the U.K. Technology Secretary Michelle Donelan and U.S. Commerce Secretary Gina Raimondo (Figure A).

Figure A

U.S. Commerce Secretary Gina Raimondo (left) and U.K. Technology Secretary Michelle Donelan (right). Image: U.K. government

Both countries will now “align their scientific approaches” and work together to “accelerate and rapidly iterate robust suites of evaluations for AI models, systems, and agents.” This action is being taken to uphold the commitments established at the first global AI Safety Summit last November, where governments from around the world accepted their role in safety testing the next generation of AI models.

What AI initiatives have been agreed upon by the U.K. and U.S.?

With the MoU, the U.K. and U.S. have agreed how they will build a common approach to AI safety testing and share their developments with each other. Specifically, this will involve:

  • Developing a shared process to evaluate the safety of AI models.
  • Performing at least one joint testing exercise on a publicly accessible model.
  • Collaborating on technical AI safety research, both to advance the collective knowledge of AI models and to ensure any new policies are aligned.
  • Exchanging personnel between respective institutes.
  • Sharing information on all activities undertaken at the respective institutes.
  • Working with other governments on developing AI standards, including safety.

“Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance,” Secretary Raimondo said in a statement.

SEE: Learn how to Use AI for Your Business (TechRepublic Academy)

The MoU primarily relates to moving forward on plans made by the AI Safety Institutes in the U.K. and U.S. The U.K.’s research facility was launched at the AI Safety Summit with the three primary goals of evaluating existing AI systems, performing foundational AI safety research and sharing information with other national and international actors. Firms including OpenAI, Meta and Microsoft have agreed to have their latest generative AI models independently reviewed by the U.K. AISI.

Similarly, the U.S. AISI, formally established by NIST in February 2024, was created to work on the priority actions outlined in the AI Executive Order issued in October 2023; these actions include developing standards for the safety and security of AI systems. The U.S.’s AISI is supported by an AI Safety Institute Consortium, whose members consist of Meta, OpenAI, NVIDIA, Google, Amazon and Microsoft.

Will this lead to the regulation of AI companies?

While neither the U.K. nor the U.S. AISI is a regulatory body, the results of their combined research are likely to inform future policy changes. According to the U.K. government, its AISI “will provide foundational insights to our governance regime,” while the U.S. facility will “​develop technical guidance that will be used by regulators.”

The European Union is arguably still one step ahead, as its landmark AI Act was voted into law on March 13, 2024. The legislation outlines measures designed to ensure that AI is used safely and ethically, among other rules regarding AI for facial recognition and transparency.

SEE: Most Cybersecurity Professionals Expect AI to Impact Their Jobs

The majority of the big tech players, including OpenAI, Google, Microsoft and Anthropic, are based in the U.S., where there are currently no hardline regulations in place that could curtail their AI activities. October’s EO does provide guidance on the use and regulation of AI, and positive steps have been taken since it was signed; however, this legislation is not law. The AI Risk Management Framework finalized by NIST in January 2023 is also voluntary.

In fact, these major tech companies are mostly in charge of regulating themselves, and last year launched the Frontier Model Forum to establish their own “guardrails” to mitigate the risk of AI.

What do AI and legal experts think of the safety testing?

AI regulation should be a priority

The formation of the U.K. AISI was not a universally popular way of holding the reins on AI in the country. In February, the chief executive of Faculty AI — a company involved with the institute — said that developing robust standards may be a more prudent use of government resources instead of trying to vet every AI model.

“I think it’s important that it sets standards for the wider world, rather than trying to do everything itself,” Marc Warner told The Guardian.

A similar viewpoint is held by experts in tech law when it comes to this week’s MoU. “Ideally, the countries’ efforts would be far better spent on developing hardline regulations rather than research,” Aron Solomon, legal analyst and chief strategy officer at legal marketing agency Amplify, told TechRepublic in an email.

“But the problem is this: few legislators — I would say, especially in the US Congress — have anywhere near the depth of understanding of AI to regulate it.

Solomon added: “We should be leaving rather than entering a period of necessary deep study, where lawmakers really wrap their collective mind around how AI works and how it will be used in the future. But, as highlighted by the recent U.S. debacle where lawmakers are trying to outlaw TikTok, they, as a group, don’t understand technology, so they aren’t well-positioned to intelligently regulate it.

“This leaves us in the hard place we are today. AI is evolving far faster than regulators can regulate. But deferring regulation in favor of anything else at this point is delaying the inevitable.”

Indeed, as the capabilities of AI models are constantly changing and expanding, safety tests performed by the two institutes will need to do the same. “Some bad actors may attempt to circumvent tests or misapply dual-use AI capabilities,” Christoph Cemper, the chief executive officer of prompt management platform AIPRM, told TechRepublic in an email. Dual-use refers to technologies which can be used for both peaceful and hostile purposes.

Cemper said: “While testing can flag technical safety concerns, it does not replace the need for guidelines on ethical, policy and governance questions… Ideally, the two governments will view testing as the initial phase in an ongoing, collaborative process.”

SEE: Generative AI may increase the global ransomware threat, according to a National Cyber Security Centre study

Research is needed for effective AI regulation

While voluntary guidelines may not prove enough to incite any real change in the activities of the tech giants, hardline legislation could stifle progress in AI if not properly considered, according to Dr. Kjell Carlsson.

The former ML/AI analyst and current head of strategy at Domino Data Lab told TechRepublic in an email: “There are AI-related areas today where harm is a real and growing threat. These are areas like fraud and cybercrime, where regulation usually exists but is ineffective.

“Unfortunately, few of the proposed AI regulations, such as the EU AI Act, are designed to effectively tackle these threats as they mostly focus on commercial AI offerings that criminals do not use. As such, many of these regulatory efforts will damage innovation and increase costs, while doing little to improve actual safety.”

Many experts therefore think that, in the U.K. and U.S., prioritizing research and collaboration is more effective than rushing in with regulations.

Dr. Carlsson said: “Regulation works when it comes to preventing established harm from known use cases. Today, however, most of the use cases for AI have yet to be discovered and nearly all the harm is hypothetical. In contrast, there is an incredible need for research on how to effectively test, mitigate risk and ensure safety of AI models.

“As such, the establishment and funding of these new AI Safety Institutes, and these international collaboration efforts, are an excellent public investment, not just for ensuring safety, but also for fostering the competitiveness of firms in the US and the UK.”

TechCrunch Minute: YC Demo Day’s biggest showcases

Alex Wilhelm

Well-known startup accelerator Y Combinator held one of its two yearly Demo Day events this week, showcasing hundreds of startups that recently went through its program. Judging from our coverage of the two-day event, TechCrunch found lots to like in the presenting companies. Though, if you are a bit tired of the AI chatter, you aren’t going to have too much fun looking at the rundown.

There was a lot more than just AI on display, so for today’s TechCrunch Minute I compiled a few trends and vibes from the shindig for your enjoyment. Certainly try to tune in live if you can, but if not, let us take you through the highlights and trends that were on display.

Accelerators play an important role in the startup world, giving founders early capital and advice as they get off the ground. Y Combinator competes with Techstars and other platforms globally. But with its history of backing some big successes, competition or not, we tune into YC’s events. Hit play and let’s talk about the latest round:

Significance of AI in agriculture

Artificial intelligence (AI) training datasets need to be prepared for agriculture to automate processes and enhance transparency through computer vision. Image annotation plays a vital role here, labeling images in a machine-readable way by highlighting key features and entities and attaching relevant keywords. Image annotation assists in automating and enhancing tasks such as fruit detection, livestock inventory, weed detection, soil monitoring, and crop health monitoring. It is also necessary for producing datasets that computer vision models can use in the real world. Annotating and tagging photos with suitable labels and keywords enables further categorization.

AI plays a key role in agriculture by enhancing output, limiting wastage and improving productivity. Analysis of data obtained from different sources enables farmers to make data-driven decisions, resulting in lower resource utilization and environmental impact. AI has had a significant impact on harvesting, ripening, health monitoring, and increasing crop yield. It complements farmers’ own expertise in checking crop yields, spotting diseases and anticipating natural disasters.

Eight ways AI boosts efficiency and productivity in agriculture

1. Market demand analysis: This is a critical part of modern agriculture, as it assists farmers in choosing the best crop to grow and sell. Using AI, farmers can evaluate market demand. Machine learning algorithms analyze satellite images and weather data, offering insights on the best times to plant and grow crops, and on which crops farmers should grow to maximize their profits.

2. Risk management: AI enables forecasting and predictive analysis to assist farmers in minimizing the risk of crop failure. It helps in analyzing fruits and vegetables, offering insights regarding quality, ripeness and size. It also helps detect defects and diseases in crops so that farmers can take preventive measures before the crops are affected.

3. Cross-breeding seeds: AI utilizes data on plant growth to produce crops that are less disease-prone and better adapted to weather conditions. AI speeds up the process of identifying the best plant breeds, which can then be crossbred to create better hybrid varieties.

4. Soil health monitoring: AI carries out an analysis of the soil to accurately estimate the missing nutrients as well as the overall status of the soil. It enables farmers to adjust their fertilizer and irrigation practices so that the crops grow optimally and limit their impact on the environment. AI also offers customized recommendations for managing soil and maintaining soil health in the longer term.

5. Crop protection: AI can be used for monitoring the health of plants with the aim of spotting and predicting diseases, identifying and removing weeds and recommending favorable pest treatment options. Computer vision and machine learning are used to analyze high-resolution images of crops, offering insights that help identify signs of stress or disease accurately.

6. Studying crop maturity: The use of AI-based hardware like sensors and image recognition tools enables the detection and tracking of crop changes by farmers. This further assists in obtaining accurate predictions regarding crops reaching optimum maturity. The use of AI for predicting crop maturity is far more accurate than human observation. This results in greater accuracy, major cost savings, and increased profits for farmers.

7. Intelligent spraying: AI-powered systems assist in automating weed detection and pest control. Computer vision enables very precise spraying, which can result in a 90 percent dip in pesticide usage. Data analytics is in turn used for estimating the quantity of pesticide required for each field based on its history, soil status and crop type.

8. Chatbots: They act as a medium between the farmers and their respective customers or distributors. Farmers utilize them for answering questions relating to products or services offered by them, ordering supplies as well as checking inventory levels. Chatbots utilize natural language processing and machine learning algorithms for comprehending farmers’ questions and offering real-time insights about weather, market prices, etc.
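As a concrete illustration of the per-field estimation described in point 7, here is a minimal, hypothetical sketch in Python. The field names, base rates, and adjustment factors are invented for illustration only, not taken from any real spraying system:

```python
# Hypothetical per-field records; all names and numbers are illustrative.
FIELDS = {
    "north_plot": {"crop": "wheat", "soil": "clay", "past_infestations": 3},
    "south_plot": {"crop": "corn", "soil": "loam", "past_infestations": 0},
}

# Assumed base dosage (litres per hectare) per crop, and soil adjustments
BASE_RATE_L_PER_HA = {"wheat": 1.2, "corn": 1.5}
SOIL_FACTOR = {"clay": 1.1, "loam": 1.0, "sand": 0.9}

def estimate_dose(field: str, area_ha: float) -> float:
    """Estimate pesticide needed (litres) from crop type, soil, and history."""
    f = FIELDS[field]
    rate = BASE_RATE_L_PER_HA[f["crop"]] * SOIL_FACTOR[f["soil"]]
    # Fields with repeated past infestations get an uplift, capped at +30%
    rate *= min(1.0 + 0.1 * f["past_infestations"], 1.3)
    return round(rate * area_ha, 2)

print(estimate_dose("north_plot", 40.0))  # clay wheat field with a pest history
print(estimate_dose("south_plot", 10.0))  # loam corn field with no history
```

A production system would replace these hand-set factors with models learned from sensor data and imagery, but the shape of the computation — per-field attributes in, a tailored quantity out — is the same.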

Hence, agricultural systems need to be optimized for the success of human society. Advanced technological solutions are required as traditional methods become obsolete. Globally, automation has had a tremendous impact on industries, and digital technology is having an equally huge, transformational impact on agriculture, as conveyed above.

Precision Prediction: AI Forecasting Crop Yields & Weathering Market Volatility

AI in agriculture

The world’s agricultural sector faces a dual challenge: the unpredictability of crop yields and the volatility of agricultural markets. These uncertainties pose significant obstacles to farmers, businesses, and consumers alike. However, amid these challenges, there lies an immense potential for AI-powered precision prediction to revolutionize how we approach agriculture. By harnessing the power of artificial intelligence, we can navigate through these uncertainties with greater accuracy and foresight.

The problem: Unpredictable yields and markets

The agricultural sector faces a major challenge: accurately predicting crop yields and market trends. This is crucial information for farmers and businesses to make informed decisions that can impact their success and the global food supply.

Limitations of traditional methods

Farmers and market analysts have traditionally relied on historical data and basic models to forecast yields and trends. However, these methods often fall short because of the following:

  • They fail to account for the many intricate factors that influence agricultural outcomes, such as unpredictable weather patterns (droughts, floods, and extreme temperatures) that can significantly impact crop growth and yield.
  • Outbreaks of pests and diseases can devastate crops, leading to sudden and unexpected losses.
  • Global economic fluctuations, trade policies, and consumer behavior can influence market demand and prices for agricultural products.
  • Traditional methods rely solely on historical data, which may not capture the changing dynamics of the environment and markets.

Consequences of inaccurate predictions

Inaccurate forecasts can have serious consequences for both farmers and the broader agricultural system:

For farmers

  • Wasted resources: Farmers may invest in fertilizers, water, or other inputs based on inaccurate yield predictions, leading to financial losses.
  • Financial losses: Inaccurate market predictions can lead farmers to sell their crops at lower prices than expected, impacting their income.
  • Food insecurity: In extreme cases, inaccurate yield forecasts can contribute to food insecurity, especially in regions already facing food shortages.

For markets

  • Price instability: Inaccurate forecasts can lead to sudden shifts in supply and demand, causing volatile price fluctuations in agricultural products.
  • Supply chain disruptions: Unforeseen changes in yield or market trends can disrupt supply chains, making it difficult to consistently deliver food products to consumers.
  • Negative impact on consumers: Ultimately, consumers can face higher food prices and potential shortages due to inaccurate forecasts in the agricultural sector.

According to a 2023 Food and Agriculture Organization report, approximately 14% of the global population experiences moderate or severe food insecurity.

The solution: AI-powered precision prediction

Farmers face a constant battle against unpredictable factors like weather, pests, and market fluctuations. These uncertainties can significantly impact crop yields and lead to financial losses. Traditional prediction methods, which often rely solely on historical data, fall short in capturing the complexities of the natural world and economic forces.

Fortunately, artificial intelligence (AI) advancements offer a promising solution: precision prediction. This technology utilizes cutting-edge algorithms, powerful computing, and predictive analytics to generate highly accurate forecasts.

What is precision prediction and how does it work?

Precision prediction uses powerful tools like artificial intelligence (AI) and machine learning (ML) in agriculture to create highly accurate forecasts. Imagine having a super-smart assistant that gathers a ton of information and uses complex calculations to make precise predictions.

Here’s how it works:

  • This “assistant” collects information from various sources, like past weather records, current weather forecasts, satellite images of crops, and even readings from sensors in the soil.
  • Using ML, the AI analyzes this massive amount of data, much like a detective combing through evidence for clues. It identifies hidden patterns and relationships that humans might miss.
  • AI can make accurate predictions about future events, such as crop yields and market trends, based on the patterns discovered and real-time data.
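The steps above can be sketched in a few lines of code. The following is a minimal, hypothetical example using synthetic data and an ordinary least-squares fit, standing in for the far richer data sources and models a production forecasting system would use:

```python
import numpy as np

# Synthetic historical records: [rainfall_mm, avg_temp_c, soil_nitrogen]
rng = np.random.default_rng(0)
X = rng.uniform([200, 15, 10], [800, 30, 50], size=(100, 3))

# Hypothetical "ground truth": yield rises with rain and nitrogen, falls
# with heat, plus measurement noise (all coefficients are invented).
y = 2.0 + 0.004 * X[:, 0] - 0.05 * X[:, 1] + 0.06 * X[:, 2] \
    + rng.normal(0, 0.1, 100)

# "Identify the hidden pattern": fit a linear model via least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict yield (tonnes/ha) for a new season's conditions
new_season = np.array([1.0, 550.0, 22.0, 35.0])  # [bias, rain, temp, nitrogen]
predicted_yield = new_season @ coef
print(f"Predicted yield: {predicted_yield:.2f} t/ha")
```

Real systems swap the three synthetic features for satellite imagery, soil-sensor streams, and weather forecasts, and the linear fit for far more expressive models, but the flow — historical data in, learned pattern, forward prediction — is exactly the one described above.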

How AI helps in accurate and dynamic forecasting

Imagine this: 1 out of every 6 farmers around the world has lost nearly 16% of their income in just the past two years. A recent 2023 Bayer Group report blames harsh weather conditions for this financial blow to farmers.

AI helps address this challenge by providing more accurate and adaptable forecasts than traditional methods. Here’s how:

  • Using a wider range of data sources, AI can create a more complete picture of the situation, leading to more reliable predictions.
  • Unlike traditional methods that rely solely on historical data, agriculture and food AI can factor in real-time information like current weather conditions or emerging diseases, allowing for adjustments to the forecasts as needed.

This combination of diverse data and real-time adaptation—bolstered by cutting-edge technology like IoT in agriculture—empowers AI to make dynamic and accurate predictions. These ultimately help farmers and businesses make informed decisions in a complex and ever-changing environment.

Case studies and applications

A. Real-world examples

AI is already making a real difference in the agricultural industry. Here are some specific examples:

Optimizing resource management for individual farmers: Australian farmers in the Murray-Darling Basin benefited from the COALA project, a Copernicus-based information service that uses satellite data to optimize irrigation.

Partnering with Rubicon Water, COALA’s sensor-connected cloud system helped farmers cut water use by 20%, reduce costs, and lessen environmental impact. This project shows promise for applying similar technology in agriculture globally.

Optimizing inventory management for agricultural businesses: In the United States, a large agricultural cooperative, Land O’Lakes, utilizes AI to analyze vast datasets, including weather patterns, crop yields, and historical market trends. This allows them to predict future commodity prices and generate agriculture demand outlooks with greater accuracy.

Land O’Lakes then leverages this information to optimize inventory management, ensuring they have the right amount of product available at the right time to meet market demands. This reduces potential losses and helps stabilize prices for farmers and consumers alike.

B. Benefits of AI-powered prediction

By incorporating AI into their practices, farmers and agricultural businesses can reap several benefits:

Increased accuracy and reliability: AI models can analyze vast amounts of data from various sources, leading to more accurate and reliable predictions than traditional methods.

Improved decision-making: With better forecasts, farmers can make informed decisions about planting, irrigation, and fertilizer application, leading to improved resource allocation and potentially higher yields.

Enhanced efficiency: AI-powered tools can automate and optimize tasks and provide real-time insights, allowing farmers and businesses to operate more efficiently.

Limitations of AI-based models

While AI offers exciting possibilities, it’s important to understand its limitations. One challenge is the need for high-quality data. AI models are only as good as the information they’re trained on, and poor data can lead to inaccurate predictions.

Another concern is the potential for bias in AI algorithms. These biases can stem from the data used to train the models, leading to unfair or discriminatory outcomes. For instance, an AI system trained on historical market data might perpetuate inequalities between large and small-scale farmers.

Finally, it’s crucial to remember that AI is a tool, not a replacement for human expertise. Farmers and agricultural professionals must understand and interpret AI-generated insights, ensuring they align with their specific needs and agricultural knowledge.

Addressing concerns

Concerns surrounding data privacy, security, and ethical considerations are valid and must be addressed. Implementing robust data security measures and fostering open communication between developers, farmers, and policymakers is crucial to building trust and ensuring that AI is used responsibly in agriculture.

The future of AI in agriculture


The potential of AI in agriculture is vast and exciting. Imagine robots working alongside farmers, meticulously planting seeds and applying just the right amount of water and fertilizer. This isn’t science fiction—it’s the future AI is helping to shape.

One area of development is the integration of AI with robotics. This allows for “precision farming,” where robots can perform tasks with incredible accuracy and efficiency. For example, AI-powered robots can analyze soil conditions and plant individual seeds at the optimal spacing, maximizing yields and reducing waste.

Another exciting development is the creation of AI-powered decision support systems. These systems can analyze real-time data from sensors and weather forecasts, providing farmers with crucial insights to make informed decisions. This could involve optimizing irrigation schedules, predicting potential pest outbreaks, and even suggesting the best crops to plant based on market conditions.

Ultimately, AI has the potential to revolutionize agriculture, making it more sustainable, efficient, and resilient. According to a recent World Economic Forum report, artificial intelligence could increase agricultural productivity by up to 70% by 2050. This can significantly contribute to feeding the world’s growing population and ensuring food security for future generations.

Conclusion

In conclusion, AI-powered precision prediction represents a paradigm shift in agriculture. By embracing these technologies responsibly, we can navigate uncertainties with greater clarity and confidence, ushering in a new era of productivity and sustainability.

As we embark on this journey towards a more AI-driven agricultural sector, we all must play a role in its advancement. I urge readers to delve deeper into the potential of AI in agriculture and advocate for continued research and development in this field. Together, we can harness the transformative power of AI to shape a more resilient and prosperous future for agriculture.