Can LNNs Replace Transformers?

The human brain has approximately 86 billion neurons. Mimicking it by building neural networks such as RNNs, CNNs, or Transformers is a daunting task, since scaling a network to that level is not feasible, and doing so would also require collecting huge amounts of labelled training data.

Researchers at MIT’s Computer Science and AI Laboratory (CSAIL) have come up with a new technique to address these problems, called liquid neural networks (LNNs). These are time-continuous RNNs that process data sequentially, keep a memory of past inputs, and adjust their behaviour based on new inputs.
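To make “time-continuous” concrete, here is a minimal sketch of a single liquid time-constant (LTC) neuron, the building block described in Hasani and colleagues’ work on LNNs. The update rule follows the published LTC equation, but the constants, weights, and input signal below are illustrative assumptions, not MIT’s implementation:

```python
import numpy as np

def ltc_step(x, inp, dt=0.01, tau=1.0, A=1.0, w=2.0, b=0.0):
    """One Euler step of the liquid time-constant ODE:
    dx/dt = -(1/tau + f(inp)) * x + f(inp) * A,
    where f is an input-dependent, sigmoid-like gate."""
    f = 1.0 / (1.0 + np.exp(-(w * inp + b)))  # gate in (0, 1)
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

# The effective time constant 1/(1/tau + f) changes with the input,
# so the neuron's dynamics keep adapting as new data streams in.
x = 0.0
for t in range(200):
    x = ltc_step(x, inp=np.sin(0.1 * t))  # toy input signal
print(round(x, 4))
```

The point of the gate f is that the neuron’s effective time constant depends on what it is currently seeing, which is what lets the network keep adjusting its behaviour after training.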

Daniela Rus, the director of CSAIL, recently demonstrated the use of LNNs in an autonomous vehicle. She showed that the researchers were able to build a reasonably capable self-driving car, trained within the city, using 100,000 artificial neurons. The problem, she pointed out, was that the cameras were picking up too much irrelevant noise in the car’s attention map.

With an LNN of just 19 artificial neurons, the attention map became clearer and more focused on the path the car had to take. The researchers achieved this by converting the neurons into decision trees, letting the machine decide which path to take by itself, in a kind of temporal neural network.

Dropping intelligence?

Reducing the number of neurons sounds like reducing the intelligence of these models. But Rus also demonstrated how the technique can be applied to tasks such as embodied robotics, claiming that LNNs are causal — they understand cause and effect — something that has been touted as missing in Transformer-based neural networks.

While the human brain has billions of neurons, LNNs can achieve similar functionality with a significantly smaller number of artificial neurons. It is similar to Yann LeCun’s idea of achieving dog and cat-like intelligence before human-like intelligence. This compactness also offers advantages in terms of computational efficiency and scalability.

LNN researchers highlight that applying liquid neural networks to robotics holds great potential for enhancing the reliability of autonomous navigation systems. This advancement would have numerous applications, such as improving search and rescue operations, facilitating wildlife monitoring, and enhancing delivery services, among other possibilities.

As urban areas become increasingly crowded, smart mobility becomes crucial, and according to Ramin Hasani, the pioneer of LNNs, these networks offer a significant advantage due to their compact size while also bringing down the cost of training large models.

How does this work?

According to Hasani, the inspiration for these systems came from a 1 mm-long worm, the nematode C. elegans, whose nervous system consists of just 302 neurons. Hasani noted that even with so few neurons, these worms can complete tasks with unexpectedly complex dynamics.

Hasani explains how typical machine-learning systems have neural networks that learn solely during the training phase, after which their parameters remain unchanged. However, LNNs possess the remarkable ability to continue learning even after their initial training. They achieve this by employing “liquid” algorithms that dynamically adapt to new information, much like the way neurons and synapses interact in living organisms’ brains.

For example, in an experiment on a drone, a liquid network tracked a target object better than other deep neural networks. Because it has fewer neurons, it can stay focused on the task rather than depending on the surrounding context.

A possible limitation of LNNs in self-driving vehicles is that, because the network does not account for all of its surroundings and focuses only on where it wants to go, avoiding collisions with objects in its path is difficult. The research so far has been conducted on empty roads without obstacles. This area, according to Hasani, is still being explored and improved.

LNNs for LLMs?

LNNs potentially open a whole new arena for research in AI applications. The LLM field is built on Transformers, and thus on parameter count. With liquid networks, the parameters change over time based on the results of a nested set of differential equations, which essentially means the network adapts to new tasks by itself and does not require vast amounts of training.
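Concretely, the liquid time-constant formulation published by Hasani and colleagues couples each neuron’s hidden state x(t) to the input I(t) through an ODE of roughly this form (a sketch of the published equation, where θ denotes the learned parameters, τ a time constant, and A a bias vector):

\[
\frac{dx(t)}{dt} = -\left[\frac{1}{\tau} + f\big(x(t), I(t), \theta\big)\right] x(t) + f\big(x(t), I(t), \theta\big)\, A
\]

Because the nonlinearity f depends on the current input, each neuron’s effective time constant shifts at inference time; this input-dependent dynamic is what the “liquid” label refers to.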

LLMs, such as GPT-3 and GPT-4, have already demonstrated impressive language generation capabilities. However, they are limited by their static nature, where the parameters are fixed after the initial training. LNNs, on the other hand, possess the ability to continue learning and adapting to new information, similar to how neurons and synapses interact in living organisms’ brains.

By integrating LNNs into LLMs, models could continuously update their parameters and adapt to evolving language patterns and contexts, improving their performance over time and responding more effectively to changing user needs and preferences. On the other hand, since models built on these architectures would not ingest large amounts of information, the kind of intelligent behaviour we see hints of in current LLMs is unlikely to emerge. Further exploration of the architecture would open up more avenues for LNNs.

Despite the challenges, researchers are striving to optimise LNNs by exploring ways to minimise the number of neurons required for specific tasks. Additionally, the limited literature on LNN implementation, application, and benefits makes understanding their potential and limitations challenging, unlike more widely recognised neural network architectures like CNNs, RNNs, or transformer models.


The smartest Eufy robot vacuum mop combo is $180 off on Prime Day


The Eufy Clean X9 Pro CleanerBot, a new 2-in-1 robot vacuum, boasts a deep-cleaning, hands-free mopping experience, coupled with 5,500Pa of suction power. It also uses AI navigation features to maneuver throughout your house.


Initially, I was less than enthusiastic about trying out yet another robot mop vacuum (I'd tested a similar one recently), but once I watched the Eufy X9 Pro work its way across my home floors, my mind was changed.

The CleanerBot truly lives up to its name, outperforming my old Roborock and the Yeedi MopStation Pro in both vacuum and mop functions. The suction power, 5,500Pa at maximum, is outstanding. And the main brush is bristle-less, made instead of silicone wedges that are just as effective at cleaning floors.

In my limited experience (as I've only tested this model for about a week), the primary silicone brush makes it less likely for the X9 Pro to get tangled, as it's easier to scoop debris up than sweep it.

The mopping function on the Eufy X9 Pro CleanerBot is one of the two features that impressed me the most. The X9 Pro has two rotating mop pads — which I love in a robot vac/mop combo — that apply 2.2 lbs of downward pressure to break down tough stains, an instrumental feat for my home with children and pets.


The other outstanding feature, and probably my favorite, is the use of AI for navigation, obstacle avoidance, and mapping. The CleanerBot has time-of-flight sensors and an AI camera system called AI See that helps detect and avoid objects so the vacuum doesn't suck up your kids' socks or stuffed animals.

It also uses iPath Laser Navigation to create maps of your home, separating the rooms by color in the Eufy Clean app and even showing you the obstacles the robot has found in each room. Reviewing the map after cleaning, you'll find things like power cords, shoes, and trash cans marked on the map.

Eufy is one of many to use this technology for obstacle avoidance and mapping, but it is a great feature. I hate having to pick up every last bit of paper my kids dropped before I can start cleaning — only to have the robot vacuum stuck on a power cord somewhere.


The Eufy Clean app lets you customize settings for charging, cleaning intensity, voice, and more. And it also enables you to choose from the rooms that the robot automatically created on the map so you can send it to clean just that area, like a muddy entryway. You can choose to clean zones as small as 1.6 ft by 1.6 ft on the map in case of spills.

The Eufy Clean X9 Pro CleanerBot easily adjusts to uneven surfaces to cross up to 2cm barriers.

Beyond the AI See camera system, the CleanerBot has a sensor to detect floor types in case you’re running the X9 Pro in vacuum-and-mop mode and it reaches a carpet or a rug. Once the robot detects a rug or carpet, it raises the mop pads to keep them off the fabric and only vacuums on the soft surface.


It doesn't have a self-emptying dustbin, and the dustbin itself has to be emptied after each cleaning as it's pretty tiny. Still, the mopping feature and the suction power are impressive, especially as the mop can pick up stains and dirt that my Yeedi MopStation Pro left behind.

Here's another thing I was glad to see: The X9 returns dutifully to its station to wash the mop pads rather than wait until they're overdue for a cleaning. I don't want to see my robot mop dragging dry, dirty mopping pads minutes after it should've returned for a refresh, but I haven't found this to be a problem with the X9.

The Eufy Clean X9 Pro CleanerBot is available at the discounted price of $720 for a limited time as part of Amazon Prime Day. It’s the perfect option for anyone looking for a robot vacuum and mop combination for a home with many hard floors, whether tile or hardwood, with some carpet or rugs mixed in.


US-based Cilio to Acquire AutomationFactory.AI

US-based Cilio Technologies, LLC on Tuesday announced the acquisition of AutomationFactory.AI, a Noida-based end-to-end digital transformation and product development firm. The new entity, called Cilio Automation Factory (CAF), will be headquartered in India and serve as Cilio’s global engineering hub focused on innovation for the field service management space, the company said in a statement.

Cilio Technologies currently supports over 20,000 active installers, manufacturers, and distributors across the US market and has built and supported systems for companies like Caesarstone, IKEA, LG, Lowe’s Home Improvement, and others spanning four continents.

Cilio Technologies aims to leverage AutomationFactory.AI’s technology expertise to further strengthen its position in the SaaS-based field service management technology sector. “With our combined teams working on the industry-leading field services automation platform, we expect to make swift progress integrating Cilio with all the major home improvement retailers and manufacturers,” said Amit Bana, co-founder of Automation Factory.

CAF will maintain and enhance its current systems integration business and customer base, while actively seeking opportunities to develop tailor-made solutions for Cilio’s clients. This expansion includes offering services such as digital transformation consulting, project delivery, and talent augmentation. Moreover, CAF will take the lead in catering to the South Asian home goods installation industry, which boasts a market worth over $10 billion and is experiencing a steady annual growth rate of 10%.


Public Cloud Services Revenues Surpass $500 Billion in 2022: IDC

July 11, 2023, by Jaime Hampton

Worldwide revenue for the public cloud services market totaled $545.8 billion in 2022, an increase of 22.9% over 2021, according to new data from the IDC Worldwide Semiannual Public Cloud Services Tracker.

SaaS applications were the largest source of public cloud services revenue, accounting for over 45% of the total in 2022. Infrastructure as a service (IaaS) was the second-largest revenue category at 21.2% of total revenue, with platform as a service (PaaS) and SaaS system infrastructure software at 17.0% and 16.7%, respectively.

IDC data also showed that the top five public cloud providers – Microsoft, Amazon Web Services, Salesforce Inc., Google, and Oracle – accounted for more than 41% of worldwide revenue, growing 27.3% year over year. Microsoft had the largest share of the public cloud services market at 16.8%, followed by Amazon Web Services at 13.5%.

“Given the economic challenges of the past year, it’s easy to conclude that we are in a period where a focus on constraining new expenditures and optimizing the use of existing cloud assets will dominate CIOs’ priorities and shape the fortunes of IT providers for the next several years. It’s also a very wrong conclusion,” said Rick Villars, group vice president, Worldwide Research at IDC. “The assessment and use of AI, triggered by generative AI, is starting to dominate the planning and long-term investment agendas of businesses and cloud providers will play a significant role in the evaluation and adoption of AI enablement services.”

Generative AI is Driving Investment in the Cloud

Though some budgets may be tightening due to lingering economic uncertainty, generative AI is driving long-term investment strategies for enterprises and cloud providers.

“Cloud providers are making significant investments in high-performance infrastructure,” said Dave McCarthy, research vice president, Cloud and Edge Infrastructure Services, IDC. “This serves two purposes. First, it unlocks the next wave of migration for enterprise applications that have previously remained on-premises. Second, it creates the foundation for new AI software that can be quickly deployed at scale. In both cases, these investments are resulting in market growth opportunities.”

Venture capital firm Andreessen Horowitz says that Generative AI warrants significant cloud investments due to its resource-intensive nature.

“Nearly everything in generative AI passes through a cloud-hosted GPU (or TPU) at some point. Whether for model providers/research labs running training workloads, hosting companies running inference/fine-tuning, or application companies doing some combination of both – FLOPS are the lifeblood of generative AI,” the company wrote. “For the first time in a very long time, progress on the most disruptive computing technology is massively compute bound.”

Andreessen Horowitz says that access to compute resources at the lowest total cost has become a determining factor for the success of AI companies. The venture capital firm expects the vast majority of startups to use cloud computing for generative AI, as it offers less up-front cost and more scalability in many cases.

Cloud Providers Seek to Keep Up with AI Demand

A recent Wall Street Journal article noted that traditional cloud infrastructure was not designed to support large-scale AI, and cloud providers are rushing to catch up with demand.

“There’s a pretty big imbalance between demand and supply at the moment,” said Chetan Kapoor, director of product management at Amazon Web Services’ Elastic Compute Cloud division in the WSJ article.

Only a small portion of cloud services is optimized for AI. The majority of the cloud consists of servers using general-purpose CPUs, while the GPU clusters better suited to running AI workloads make up a minority of the infrastructure.

Kapoor told WSJ that AWS plans to deploy multiple AI-optimized server clusters over the next 12 months. The article also noted that Microsoft Azure and Google Cloud are also working to increase their AI infrastructure.

Hewlett Packard Enterprise is also entering the AI cloud market. The company recently announced it is introducing HPE GreenLake for Large Language Models, an on-demand, multi-tenant supercomputing cloud service that will enable enterprises to privately train, tune and deploy large-scale AI.

“Unlike general-purpose cloud offerings that run multiple workloads in parallel, HPE GreenLake for LLMs runs on an AI-native architecture uniquely designed to run a single large-scale AI training and simulation workload, and at full computing capacity,” the company said in a release. “The offering will support AI and HPC jobs on hundreds or thousands of CPUs or GPUs at once. This capability is significantly more effective, reliable, and efficient to train AI and create more accurate models, allowing enterprises to speed up their journey from POC to production to solve problems faster.”

HPE President and CEO Antonio Neri commented that we have reached a generational market shift in AI that will be as transformational as previous tech breakthroughs like the web, mobile, and cloud.

“HPE is making AI, once the domain of well-funded government labs and the global cloud giants, accessible to all by delivering a range of AI applications, starting with large language models, that run on HPE’s proven, sustainable supercomputers. Now, organizations can embrace AI to drive innovation, disrupt markets, and achieve breakthroughs with an on-demand cloud service that trains, tunes, and deploys models, at scale and responsibly," he said.


Why Databricks Acquired MosaicML

Last month, data processing giant Databricks announced its definitive agreement to acquire MosaicML, a generative AI platform, in a transaction valued at approximately $1.3 billion.

The acquisition was aimed to make generative AI accessible to enterprises, allowing them to “build, own and secure best-in-class generative AI models while maintaining control of their data.” Databricks CEO, Ali Ghodsi, emphasised the goal of democratising AI and making the “Lakehouse the best place to build generative AI and LLMs.”

MosaicML is recognised for its state-of-the-art MPT large language models and provides enterprises with a way to quickly and cost-effectively build and train their own models using their data. MosaicML is also an attractive target because it claims to offer inexpensive services on par with open-source front-runners like LLaMA and Falcon.

What MosaicML offers

Founded in 2021 by Naveen Rao, a Stanford electrical engineer, and Hanlin Tang, a Harvard graduate, MosaicML has raised a total of $37M in funding over two rounds, with its latest round reported in January 2023.

MosaicML offers various options for developers to utilise its platform, including an API for easy integration into front-end applications and customisation of models with their data.

Developers can also use MosaicML’s tools to pre-train custom models from scratch and serve them through the platform. The compatibility of MosaicML with third-party tools like LangChain allows developers to leverage those tools on top of their customised models, providing flexibility and ownership over the entire model.

While MosaicML, like its competitors, provides its technology for rent, it differentiates itself by also offering its code to customers. This allows customers to run the code on their own hardware, ensuring the confidentiality of their data from MosaicML. The approach appeals to corporate customers who prioritise data privacy, as the value of an AI system is heavily dependent on the training data used.

The company emphasises the importance of open-source models, particularly in industries handling sensitive data, where local customisation and control over the model are crucial. MosaicML focuses on serving specific industries like healthcare and banking, providing the ability to interpret and summarise large amounts of data securely.

The company claims to make its technology accessible to all organisations at a significantly lower price, up to 15 times cheaper than its competitors. Notable customers such as AI2, Generally Intelligent, Hippocratic AI, Replit, and Scatter Labs leverage MosaicML for various generative AI use cases.

MosaicML’s MPT-30B LLM is a 30-billion parameter model that the company claims surpasses the quality of OpenAI’s GPT-3 despite having significantly fewer parameters, making it easier to run on local hardware and more cost-effective for deployment.
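For reference, MosaicML publishes the MPT checkpoints on Hugging Face, so running the model locally looks roughly like this sketch (the checkpoint name and trust_remote_code flag follow MosaicML’s model card, while the prompt and generation settings are illustrative; the full 30B model still needs tens of gigabytes of GPU memory):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mosaicml/mpt-30b"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    torch_dtype="auto",      # use the checkpoint's native precision
    trust_remote_code=True,  # MPT ships custom modeling code
    device_map="auto",       # requires the accelerate package
)

# Coding tasks are where MosaicML claims MPT-30B excels.
inputs = tokenizer("def fizzbuzz(n):", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0]))
```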

MPT-30B uses the FlashAttention mechanism, which offers faster inference and training, making it more efficient than Falcon and LLaMA. Additionally, MPT-30B is designed to fit the constraints of real hardware, optimising its performance on deep-learning GPUs.

Additionally, MosaicML claims that MPT-30B compares favourably to LLaMA and Falcon in terms of performance. The model requires less compute power for training while delivering similar results, especially excelling in coding tasks. However, the claims made by MosaicML are yet to be independently verified using Stanford’s HELM measure.

Despite some comparisons and criticisms, MosaicML ultimately sees open-source LLMs, including LLaMA and Falcon, as part of the same team. The company believes proprietary platforms like OpenAI pose real competition and emphasises the empowering nature of open-source LLMs, putting the power back into the hands of enterprise developers. MosaicML believes open LLMs are closing the gap with closed-source models and have reached a point where they are extremely useful, even if they haven’t completely surpassed them yet.

Databricks’ Motive

Databricks has positioned itself strongly in the market through several strategic moves. The introduction of LakehouseIQ, the acquisition of MosaicML, and the development of Unity Catalog have placed Databricks in a favourable position to maintain its market position and compete for incremental market share.

MosaicML’s platform will be integrated and scaled over time to provide a unified platform where customers can “build, own and secure their generative AI models”. For Databricks, the acquisition of MosaicML is a strategic move aimed at providing enterprises with tools to easily and cost-effectively build their own large language models (LLMs) using their proprietary data.

By integrating this process into the broader Databricks toolchain and workflow, the company aims to reduce the costs associated with training and running LLMs. This strategy recognises the market demand for specialised LLMs that are more cost-effective and finely tuned for specific tasks. While general-purpose LLMs will continue to exist, Databricks sees an opportunity to cater to the need for tailored solutions. Both Snowflake and Databricks are actively working to provide enterprise-class governance and intellectual property protection as part of their specialised LLM offerings.

The Databricks Lakehouse Platform, combined with MosaicML’s technology, will provide customers with a “simple, fast way to retain control, security, and ownership over their valuable data without high costs.” MosaicML claims that its automatic optimisation of model training enables 2x-7x faster training compared to standard approaches.


BigPanda Unveils Automated Incident Analysis Capability Powered by Generative AI

July 11, 2023, by Jaime Hampton

An emerging AIOps use case for generative AI is incident analysis for IT response teams. Identifying the root cause of incidents and anomalies in IT systems can be time- and resource-intensive, and complex alerts often go unnoticed or uncommunicated to downstream staff and systems.

Generative AI for Automated Incident Analysis is a new capability from AIOps platform BigPanda. This new feature enables incident analysis and visibility for IT operations teams and can estimate incident impact, suggest likely root causes, and provide natural language incident titles and summaries, according to the company.

BigPanda says the new feature uses LLM technology to automatically deliver accurate and consistent incident analysis that is easy to understand while reducing mean time to identify and improving incident resolution speed.

BigPanda CEO Assaf Resnick said in a statement that generative AI has taken the company’s platform to a new level: “Customers that have used early versions of our Generative AI report it helps accelerate incident triage while reducing the number of tickets escalated to senior staff. This results in not just saved resources but makes their systems more reliable and helps drive their businesses forward.”

The new Generative AI feature combines AI with correlated and enriched alerts to deliver natural language summaries of combined alerts across multiple systems. BigPanda says the correlated alerts’ summary, title, and root cause analysis can automatically be added to ITSM tickets or chat collaboration channels with specific L2/L3 teams without manual intervention.
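BigPanda has not published its internals, but the core idea of turning a batch of correlated alerts into a natural-language title, summary, and likely root cause can be sketched with a generic chat-completion call. The alert strings, prompt, and model choice below are purely illustrative, not BigPanda’s pipeline:

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

# A correlated alert cluster, as it might arrive from monitoring tools.
alerts = [
    "db-prod-3: CPU > 95% for 10m",
    "api-gateway: p99 latency 4.2s (threshold 1s)",
    "checkout-service: HTTP 502 rate at 12%",
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are an IT incident analyst. Given correlated "
                    "alerts, return a one-line incident title, a short "
                    "summary, and the most likely root cause."},
        {"role": "user", "content": "\n".join(alerts)},
    ],
)
print(response.choices[0].message.content)  # paste into the ITSM ticket
```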


In a blog post, the company likened its new Generative AI capability to an “experienced full stack IT expert who can work at lightning speed.” The company asserts it results in faster incident resolution, as well as fewer escalations and ITSM tickets. BigPanda also claims it will enable reduced reliance on skilled staff for incident analysis and standardized communication across stakeholders. During beta testing, the company found it saved up to seven minutes per ticket during an escalation and produced accurate causality 95% of the time.

“BigPanda’s AI-powered Generative AI empowers our ITOps teams by providing faster incident detection and independent resolution using generative AI,” said Jeremy Talley, lead operations engineer at Robert Half International. “The rapid, automated extraction of meaningful insights from our complex IT alert environment not only makes us better at L1 response but also reduces escalations to our L2 and L3 experts.”

BigPanda was recently named a strong performer in the Forrester Wave: Process-Centric AI for IT Operations, Q2 2023. The report analyzed 11 AIOps vendors and assessed performance across 30 criteria, and BigPanda tied for the second-highest score in the strategy category.

“BigPanda is a great option for organizations looking for a technology-agnostic solution that automates operational workflows across a wide assortment of technologies in the landscape,” the report stated.

There are concerns about the data privacy and security of large language models. BigPanda says its deployment of Generative AI does not use any customer data to train its models and adheres to the company’s zero data retention policy. Additionally, the company says it uses a secure, enterprise-grade system with dedicated APIs to transact and retrieve answers, and customers are required to opt in to use BigPanda Generative AI.

This article first appeared on sister site Datanami.


Shutterstock expands deal with OpenAI to build generative AI tools

By Kyle Wiggers

Shutterstock today announced that it plans to expand its existing deal with OpenAI to provide the startup with training data for its AI models.

Over the next six years, OpenAI will license data from Shutterstock including images, videos and music as well as any associated metadata. Shutterstock, in turn, will gain “priority access” to OpenAI’s latest tech and new editing capabilities that’ll let Shutterstock customers transform images in Shutterstock’s stock content library.

Shutterstock says that, in addition, OpenAI will work with it to bring generative AI capabilities to mobile users through Giphy, the GIF library Shutterstock recently acquired from Meta.

“The renewal and significant expansion of our strategic partnership with OpenAI reinforces Shutterstock’s commitment to driving AI tech innovation and positions us as the data and distribution partner of choice for industry leaders in generative AI,” Shutterstock CEO Paul Hennessy said in a press release.

Stock content galleries like Shutterstock and generative AI startups have an uneasy — and sometimes testy — relationship. Generative AI, particularly generative art AI, poses an existential threat to stock galleries, given its ability to create highly customizable stock images on the fly.

Contributors to stock image galleries, meanwhile, including artists and photographers, have protested against generative AI startups for what they see as attempts to profit off their work without providing credit or compensation.

Early this year, Getty Images sued Stability AI, the creators of the AI art tool Stable Diffusion, for scraping its content. The company accused Stability AI of unlawfully copying and processing millions of Getty Images submissions protected by copyright to train its software.

In a separate suit, a trio of artists are alleging that Stability AI and Midjourney, an AI art creation platform, are violating copyright law by training on their work from the web without their permission.

Some experts suggest that training models using public images, even copyrighted ones, will be covered by fair use doctrine in the U.S. But it’s a matter that’s unlikely to be settled anytime soon.

In contrast to Getty Images, Shutterstock — perhaps unwilling to hinge profits on a lengthy court battle — has embraced generative AI, partnering with OpenAI to roll out an image creator powered by OpenAI’s DALL-E 2. (The Shutterstock-OpenAI deal dates back to 2021, but the image creator didn’t launch until late 2022.) Beyond OpenAI, Shutterstock has established licensing agreements with Nvidia, Meta, LG and others to develop generative AI models and tools across 3D models, images and text.

In an attempt to placate the artists on its platform, Shutterstock also maintains a “contributor fund” that pays artists for the role their work has played in training Shutterstock’s generative AI and ongoing royalties tied to licensing for newly-generated assets.

How Big Tech Companies are Tackling Climate Change

An increasing number of tech companies are making their mark on the climate sector, leveraging the transformative power of advanced AI/ML models. Recently, Huawei announced its latest AI model, Pangu-Weather, which boasts higher precision than traditional numerical weather forecasting methods.

The model utilises deep learning techniques along with 43 years of historical data, with a prediction speed reportedly 10,000 times faster than traditional methods. According to the China Meteorological Administration, Pangu-Weather successfully forecast the path of the recent Typhoon Mawar five days before it changed course in the waters near Taiwan’s eastern islands.

Google Research has developed MetNet-2, a deep learning model aimed at weather and climate problems. The model can predict precipitation with remarkable precision, providing forecasts at a spatial resolution as fine as 1 km and a time resolution of 2 minutes, for up to 12 hours ahead.

Microsoft and DeepMind have also built weather AI models, ClimaX and GraphCast, respectively.

Meanwhile, NVIDIA is working on its Earth-2 model in collaboration with climate researchers and policymakers. Using Modulus and FourCastNet (an ML model that emulates the dynamics of global weather patterns, predicting extremes with unprecedented speed and accuracy), NVIDIA achieved significant improvements in weather trajectory generation: it was able to produce 21-day weather trajectories for 1,000 ensemble members in a fraction of the time it previously took for a single trajectory.

IBM has also been in the climate space for a while. In 2016, IBM acquired The Weather Company, the digital arm behind the Weather Channel. Built on IBM’s Watson framework, its model uses AI to combine information from over 100 weather forecast models worldwide.

Startup Zeus AI, founded by former NASA scientists, leverages vast volumes of data from government satellites, including information about atmospheric winds, water vapour, temperature changes, and cloud cover that affect global weather patterns. Other companies solving weather prediction problems include Tomorrow.io, Atmo.io, Jua.ai and Zeus.ai.

Limitations

Implementing ML models for weather prediction comes with limitations, chief among them the training data itself. Rare and extreme weather conditions pose a challenge for both training and testing. Data availability is another problem: in NWP models that use satellite data, missing values are interpolated, but feeding such interpolated data to AI models can introduce concept drift and built-in biases.

According to a paper on machine learning for weather and climate modelling, neural networks may require explicit instruction on the relationships between certain variables. Short- to medium-length datasets are insufficient for a model to learn long-term variations such as El Niño, or any form of climate change.

The New-Age AI Forecasting

Weather forecasters primarily rely on numerical weather prediction (NWP) models, which use data from weather stations, weather balloons, and satellites to understand the current state of the atmosphere. By solving equations governing air movement, these models can predict most weather patterns accurately. However, smaller events, such as localised thunderstorms, or which side of a town will experience heavy rain, are hard to predict this way. The method also demands expensive computational power, and that is where AI models have an advantage.

ML-based forecasting methods instead analyse a large dataset of past weather maps to learn typical weather patterns, as opposed to traditional NWP models that solve complex physical equations. Once trained on historical data, AI models use current weather information to make future predictions. However, this method also struggles to forecast localised weather conditions.
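A toy sketch of this data-driven recipe: learn a mapping from the current weather grid to the grid one step ahead. Synthetic arrays stand in here for real reanalysis archives, and the tiny convolutional network is purely illustrative; production systems such as Pangu-Weather or GraphCast are vastly larger and use specialised architectures:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# 1,000 "historical" pairs of 32x32 single-channel weather maps;
# the future map is a shifted, noisy copy of the current one.
current = torch.randn(1000, 1, 32, 32)
future = torch.roll(current, shifts=1, dims=3) + 0.1 * torch.randn_like(current)

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    pred = model(current)                        # predict next-step grids
    loss = nn.functional.mse_loss(pred, future)  # compare to "observed" future
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: mse {loss.item():.4f}")

# Inference: feed the latest observed grid to get a one-step forecast.
forecast = model(current[:1])
```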

Machine learning systems also use significantly less computing power than traditional prediction methods; NVIDIA’s Earth-2 model reduces energy consumption by approximately 1,000 times.

The India Meteorological Department (IMD) has started using AI on an experimental basis to improve nowcasting and short-range weather forecasting, which range from three hours to seven days.

AI models can provide accurate, tailored forecasts for specific user needs by picking up fine-grained details in the data that traditional methods may overlook. Given the challenges that remain, however, AI models are unlikely to replace traditional methods for now.


Building AI Products with OpenAI: A Free Course from CoRise

Sponsored Post

CoRise, in collaboration with OpenAI, presents a FREE and comprehensive course on building AI products. Unlock your potential by learning to leverage the power of OpenAI API and ChatGPT. Dive into the world of AI as you define problems, iterate on prototypes, and transform your ideas into fully functional services in just one week.


Upwork Launches New Generative AI Tools and Services Hub


Upwork, a global employment marketplace, is launching a hub dedicated to connecting companies seeking workers with technical skills specific to artificial intelligence with independent professionals who have such expertise. The AI Services hub is also designed to host a suite of emergent generative AI tools from OpenAI and AI content generator Jasper, according to Upwork.


Upwork offers free trial of Jasper generative AI

According to Dave Bottoms, general manager and vice president of products for Upwork Marketplace, the company is launching a 30-day free trial of Jasper generative AI services for copywriting, marketing, and image creation, available to all talent on Upwork, so they can increase their productivity, communicate more effectively with clients, and elevate the quality of their work.

“These updates are technically found outside the AI Services hub, in other areas of the product flow,” he said.

Increased demand for AI jobs and skills

Jobs in generative AI reportedly jumped 20% in the U.S. in May 2023. According to online training and education site Coursera, the top jobs in AI include positions for data scientists and for engineers specializing in AI, machine learning, data, robotics and software.

Upwork sees rapid adoption of generative AI among pros and businesses

The company noted more than 450% growth in weekly job posts related to generative AI compared to the same time last year.

“AI was the fastest-growing category on our platform in the first half of 2023, as measured by total number of individuals hired,” said Bottoms. He said that generative AI job posts on Upwork (Figure A) are up more than 1,000% and related searches are up more than 1,500% when comparing the second quarter of 2023 to the fourth quarter of 2022.

Figure A: Engineers and data scientists are among the high-demand AI positions. Image: Upwork

Upwork’s recent research indicates that demand for AI skills is accelerating, with 64% of C-suite executives saying they will hire more professionals of all types due to generative AI, according to Bottoms.

“Businesses are seeking out independent professionals on Upwork with expertise in AI tools like ChatGPT, DALL-E, Midjourney, Stable Diffusion, Jasper and more.” He said such job posts are up 230% in the second quarter of 2023 versus the fourth quarter of 2022.


The key to new Upwork AI hub is interactivity

The new AI Services hub is the company’s first foray into helping Upwork clients find and hire the best AI talent.

“It is an interactive experience that helps businesses quickly connect with independent professionals with AI expertise and find tips for integrating AI into their business,” he said. “For example, once there, clients can book a 1:1 consultation to get advice from an AI expert, hire a professional with experience in common AI use cases, check out the new AI Services resource center and more.”

Among tools that the AI Services hub includes are:

  • Guides for integrating generative AI into business.
  • Research on the adoption of generative AI in the workplace.
  • Access to the AI and Machine Learning Upwork Community Group.

OpenAI powers some features in Upwork’s new hub

The company said OpenAI will power some new features in beta for job seekers and employers, including:

  • A large language model job post generator designed to help recruiters create customizable job post drafts (a minimal sketch of the idea follows Figure B below).
  • A proposal tips tool for independent professionals on Upwork to improve their proposals for specific job posts by providing personalized tips on how to best showcase skills that meet the demands of hiring clients.
  • Upwork Chat for businesses working on the platform; this is designed to help with tasks such as hiring and talent search (Figure B).
Figure B: Upwork Chat lets users work through an AI natural language interface. Image: Upwork
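Upwork has not described how the job post generator is built, but the general shape of an LLM-backed draft generator can be sketched with a generic chat-completion call. The function, prompt, fields, and model below are illustrative assumptions, not Upwork’s implementation:

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

def draft_job_post(role, skills, budget):
    """Hypothetical generator: turn a few structured fields into a
    customizable job post draft, roughly as Upwork's feature is described."""
    prompt = (
        f"Write a draft freelance job post for a {role}. "
        f"Required skills: {', '.join(skills)}. Budget: {budget}. "
        "Include a title, a short description, and deliverables."
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(draft_job_post("prompt engineer", ["LLMs", "Python"], "$60/hr"))
```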

Growing demand for prompt engineers, creators, AI chatbot pros and more

“A few roles that we’ve seen gain steam are prompt engineers, AI content creators, machine learning and deep learning engineers, data scientists, AI chatbot developers, and professionals with model tuning and AI model integration expertise,” Bottoms said. “As we look at work being sought across a breadth of marketplace categories we support, we see talent using AI tools to augment their workflows in nearly every category.”

He added that Upwork is aiming to be the pre-eminent destination for clients seeking AI-related talent and work. “Over the past year alone, more than 10,000 contracts have involved AI work, and more than 250 AI skills are represented by talent on Upwork.”
