Turing Award Winner Warns of Machine Learning Hardware Abuse

Ask any developer and they will tell you how expensive it is to build or run a GPT model. The primary reason is the reliance on traditional or outdated GPUs, and it is becoming a pressing issue. “Hardware for machine learning is being exploited today,” averred American computer scientist and Turing Award winner Jack Dongarra.

Dongarra further emphasised the need for a better understanding of utilising these resources effectively. He advocated for identifying the appropriate applications and environments where they can be optimally employed. He also underscored the importance of developing software frameworks that drive computations efficiently and reciprocally support hardware capabilities.

So, what’s the solution then? Looking to the future, Dongarra envisions a multidimensional approach to computing, where various technologies, including CPU, GPU, machine learning, neuromorphic, optical, and quantum computing, converge within high-performance computers.

“Quantum is something that’s on the horizon. So I see each of these devices part of the spectrum by which we might be putting them together in our high performance computer. It’s an interesting array of devices that we will be putting together. Understanding how to use them efficiently will be more challenging,” said the 72-year-old scholar.

Capital vs Innovation

Drawing parallels between capital and innovation, Dongarra said, “Amazon has their own set of hardware resources, Graviton. Google has TPU. Microsoft has their own hardware that they’re deploying in their cloud. They have this incredible richness that they can invest in the hardware, the solid, specifically problems they need to address. That’s quite a departure from what we experience in high performance computing where we don’t have that luxury. We don’t have that large amount of resources funding at our disposal to invest in hardware to solve our specific problems.”

While comparing the technological prowess of Apple with traditional computer companies such as HP and IBM, Dongarra pointed to a significant difference in market capitalisation. While Apple’s value soars into the trillions of dollars, the combined market capitalisation of HP and IBM falls short of the trillion-dollar mark. “We have cloud based companies with their amount of revenue they can be innovative and build their own hardware. For instance, iPhones consist of many processors designed specifically to aid in what the iPhone does. They’re replacing software with hardware. They do that because it’s faster and that’s quite different from the high performance computing community in general,” he said.

On the other hand, supercomputers are predominantly constructed using off-the-shelf components from players like Intel and AMD. These processors are often supplemented with GPUs, and the overall system is connected through technologies such as InfiniBand or Ethernet. Dongarra explained that the scientific community, which relies on these supercomputers, faces funding constraints that limit its ability to invest in specialised hardware development.

IBM has been at the forefront of quantum computing, but last week at Think, CEO Arvind Krishna unveiled a vision emphasising the potential of combining hybrid cloud technology and AI alongside quantum computing throughout the upcoming decade. Though the company is often considered traditional with its focus on hardware, its recent leap in generative AI says otherwise.

Story Behind LINPACK

“I wanted to be a high school teacher,” Dongarra said. Computer science did not even exist as a discipline when the polymath was a student. Dongarra’s passion for numerical methods and software development kindled during his time at Argonne National Laboratory near Chicago. As he worked on developing software for numerical computations, his experience solidified his interest in the field. This pivotal moment led him to earn a master’s degree while working full time on designing portable linear algebra software packages that gained global recognition.

Soon after, Dongarra embarked on a journey to the University of New Mexico. During this period, he made a groundbreaking contribution by creating the LINPACK benchmark, which measures the performance of supercomputers. Joined by Hans Meuer, he established the iconic Top500 list in 1993, tracking the progress of high-performance computing and illuminating the fastest computers worldwide.

Today, Dongarra continues to shape the future of computing through his ongoing research at the University of Tennessee, which explores the convergence of supercomputing and AI. As AI and machine learning continue to gain momentum, his work focuses on optimising the performance of algorithms on high-performance systems, enabling faster and more accurate computations.

A ‘2001: A Space Odyssey’ Fan

In the 1980s, a luminary emerged in the realm of computing, where innovation reigns supreme: Jack Dongarra. “At almost every turn, we see computational people, looking for alternative ways to solve problems and they see AI as a way. But AI is not going to solve the problem; it’s going to help them in terms of the solution to the problems,” said the 2021 ACM Turing Award recipient. He received the award for contributions that ensured high-performance computational software remains in sync with advancements in hardware technology.

“AI has really taken off recently because of a number of reasons. One is the tremendous amount of data we have today on the internet. Resources that can be mined to help with the training process. So a flood of data that we have available. We have processors that can do the computation at a very fast rate. So we have computing devices which can be optimised and be used very effectively in helping to train,” he said while emphasising the evolving technology.

Dongarra also highlighted the crucial role of linear algebra in AI algorithms, emphasising the importance of efficient matrix multiplies and steepest descent algorithms. “Many things have come into place that allow AI machine learning to really help and be a very useful resource. AI has really had a big impact in many areas of science like drug discovery, climate modelling and biology, in drug discovery in materials and cosmology in high energy physics,” he further added.
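The “efficient matrix multiplies and steepest descent algorithms” Dongarra highlights are simple to state. A minimal pure-Python sketch of both kernels, purely illustrative rather than an optimised implementation:

```python
# The two kernels Dongarra singles out, in their simplest form.

def matmul(A, B):
    """Naive dense matrix multiply: the core kernel of both HPC and ML."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def steepest_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient to minimise a function."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2 * (x - 3): x converges to 3.
x_min = steepest_descent(lambda x: 2 * (x - 3), x0=0.0)
print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
print(round(x_min, 6))                              # 3.0
```

Production systems run exactly these operations, only vectorised and tiled for the GPU and accelerator hardware the article discusses.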

Regarding artificial general intelligence (AGI), Dongarra believes that machines should be developed to automate mundane tasks and assist in scientific simulations and modelling. However, he underscores the need for caution, as the vast amount of unfiltered information available on the web can be misleading.

“2001: A Space Odyssey is a relevant movie, in the sense that it looks at AI with a computer that is maybe going a little haywire and takes over a mission and does some damage along the way. I found that fascinating when I first saw it back when it came out and I still enjoy the story. It has many things which are relevant today,” he concluded.

The post Turing Award Winner Warns of Machine Learning Hardware Abuse appeared first on Analytics India Magazine.

Why Stability AI Trails Behind RunwayML

RunwayML has had somewhat of a runaway success of late, as the trend of AI-generated videos has grown exponentially. From pizza commercials, to mockups of early 2000s home video, to short films, text-to-video is quickly becoming the new paradigm of generative AI.

Jumping on this trend, Stability AI has released a new SDK for Stable Diffusion that will allow for the creation of animations. With the SDK, users can prompt with just text, an image with text, or a video with text, to create output animations. What first began with Meta’s Make-A-Video has now become the new frontier of generative AI algorithms. However, there are a few key players who are suspiciously missing from the lineup.

Too little, too late

The new release by Stability AI is a software development kit which works with Stable Diffusion 2.0 and Stable Diffusion XL. The SDK has the capability to influence the output through a variety of parameters, from general purpose parameters like style presets, cadence, and FPS (frames per second) to more in-depth parameters to influence characteristics like colours, 3D depths, and post processing.
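The parameter surface described above can be pictured as a simple settings dictionary. The key names below are guesses based on the categories the article lists (presets, cadence, FPS, colour and depth controls) and are not the Stable Animation SDK’s actual argument names:

```python
# Hypothetical animation-job settings, loosely mirroring the kinds of
# parameters the article describes. These key names are invented for
# illustration and are not the SDK's real parameters.
animation_settings = {
    "style_preset": "anime",    # general-purpose style preset
    "cadence": 2,               # diffuse every Nth frame, blend the rest
    "fps": 24,                  # frames per second of the output
    "color_coherence": "LAB",   # keep colours stable across frames
    "use_depth_warping": True,  # 3D-depth-aware motion between frames
}

def validate(settings):
    """Basic sanity checks before submitting a job."""
    assert settings["fps"] > 0, "fps must be positive"
    assert settings["cadence"] >= 1, "cadence must be at least 1"
    return settings

validate(animation_settings)
```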

While this SDK is a good step forward for Stability AI, it seems that they are late to the party. Similar solutions have already existed in the market for a while now, built on Stability’s own models. Deforum, an online community of AI image creators and artists, has already created a Colab notebook for text-to-animation. However, Deforum is fairly basic, as it just melds similar images generated by SD into each other, creating the illusion of an animation.

The true competitor to the Stable Animation SDK is RunwayML’s Gen-2 text-to-video service. This new model, whose paper is yet to be released, builds upon Gen-1’s capabilities of style transfer and video modifications to completely generate video from just a text prompt. Similar to the Stable Animation SDK, users can use text, images, or videos as a prompt to generate videos from scratch.

While RunwayML’s Gen-2 can only be accessed through a waitlist, it is a complete product which can be used without any technical knowledge. The Stable Animation SDK, on the other hand, is targeted at developers who wish to multiply the capabilities of Stable Diffusion’s models.

Even as video generation is emerging as the next big genAI technology, it seems that many of the companies that capitalised on text-to-image are nowhere to be found.

RunwayML: the new DALL-E?

Early last year, OpenAI released DALL-E 2, an image generation algorithm, which kickstarted a wave of innovation. This resulted in the creation of Midjourney, Stable Diffusion, Imagen, and more, catapulting generative AI into the mainstream. However, with the innovations surrounding text-to-video, a lot of these companies have stayed silent, especially OpenAI.

With the release of ChatGPT, and subsequently GPT-4, it seems that OpenAI is content with grooming its golden goose. As such, we have not seen any improvements to DALL-E, apart from its integration into Bing Chat. There is also no talk about any text-to-video model from the AI giant, counting it out of the newest wave of innovation.

Midjourney has also not provided any information on possible text-to-video algorithms, instead choosing to focus on increasing their market lead by adding new features to their image generator. However, it seems that research is leading innovation, as it had just before the explosion of text-to-image models.

Meta’s AI research wing released a paper in September last year that detailed an approach to generating video without the need for text-video data pairs. Similarly, ByteDance, the company behind TikTok, also released a research paper, harnessing the power of diffusion models to generate videos. While neither of these models has been released to the public, the research shows that the ideas behind these approaches are sound, backed up by the variety of generated videos shown on their websites.

Google, in collaboration with the Korea Advanced Institute of Science and Technology, followed suit with the publication of a paper on projected latent video diffusion models. Notably, this paper was also published with code, allowing for the replication of the approach.

Building on the concept of text-to-video diffusion models, a team from Alibaba released ModelScope on HuggingFace, which is open for all to use. This is the only service, apart from Deforum, that is open for use.

While the text-to-video market is still in its infancy, the AI-generated commercials show but an inkling of what is possible with video-generating algorithms. Meta has also released a set of generative AI tools that are targeted at advertisers on the platforms, so it is not a reach to think that Make-A-Video can be integrated into this in the future. Just as with any generative AI solution, the potential for innovation is boundless.


7 Mighty AI Automation Tools for Enterprises

Repetitive and time-consuming tasks can drain productivity within enterprises. However, with the advent of applications that automate functions, tasks can now be better managed, making life easier for developers, data scientists, and all other personnel within an organisation. Adopting AI-powered platforms helps analyse and automate functions, thereby improving work efficiency.

Here are 7 powerful AI automation tools that can help enterprises function smoothly:

Microsoft Azure AI

Microsoft Azure, a cloud computing platform, employs AI to assist with business processes and provide intelligent services. It offers a wide range of secure, reliable, and scalable services and tools for developers and data scientists to build and deploy AI solutions. Microsoft Azure also supports popular frameworks like TensorFlow and PyTorch, and now includes GPT-4 through Azure OpenAI service.

IBM Watson

IBM Watson is an AI-powered cognitive computing platform that utilises machine learning, natural language processing, and other AI technologies to analyse data, provide insights, and answer questions. The recently announced IBM WatsonX platform is expected to amplify the impact of AI across businesses.

Introducing IBM watsonx, our new enterprise-ready AI and #data platform designed to multiply the impact of #AI across your business.
See what's coming: https://t.co/Bpud6oWavS pic.twitter.com/31Pv7pKmkx

— IBM Watson (@IBMWatson) May 9, 2023

Automation Anywhere

As a leading provider of Robotic Process Automation (RPA) software, Automation Anywhere enables organisations to automate business processes. Its technology allows software robots to mimic human actions, performing repetitive tasks such as data entry and report generation. The platform also offers bot management and monitoring features. This powerful RPA platform helps businesses streamline their operations, increase efficiency and reduce costs as well.

Amazon SageMaker

Amazon SageMaker is a machine learning service provided by Amazon Web Services (AWS). It offers built-in AI and machine learning capabilities, allowing developers and data scientists to quickly build, train, and deploy machine learning models. The service includes pre-built algorithms, Jupyter notebooks, and supports popular machine learning platforms such as TensorFlow, PyTorch and Scikit-learn.

Salesforce Einstein

Salesforce Einstein is an AI-powered suite of features integrated into the Salesforce platform. It automates processes and personalises customer experiences by providing predictive insights and recommendations based on data insights. It is integrated across various Salesforce products, including Sales Cloud, Service Cloud, and Marketing Cloud. Salesforce recently introduced Einstein GPT, a generative chatbot AI for CRM.

Zoho

Zoho provides a suite of products that utilise generative AI for various enterprise functions. From HR to analytics and customer relationship management, Zoho offers a range of AI-powered applications. With over 13 Zoho applications integrated with ChatGPT, users can benefit from AI applications such as Zoho Analytics and Zoho CRM. Zoho also incorporates AI into the customer experience through their virtual assistant, Zia.

Revolutionize the remote support experience with the Zoho Assist and Zia integration. Our virtual assistant, Zia, powered by @OpenAI provides quick and accurate responses to customer questions. Read more: https://t.co/q6jDOemUfv#zoho #zohoassist #remotedesktop #chatGPT pic.twitter.com/hnhDzlR7Wl

— Zoho Assist (@ZohoAssist) May 4, 2023

Zapier

A cloud-based automation tool, Zapier allows users to connect various web applications and automate workflows between them. No coding knowledge is required, which makes it user-friendly. Zapier is used by enterprises to streamline their work processes and automate tasks such as data entry, file management, customer management, and many other functions. Zapier connects with more than 5,000 apps, including Gmail, Salesforce, Trello and Zoho.
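The trigger-and-action pattern that tools like Zapier are built around can be pictured in a few lines. The sketch below is a generic illustration of the idea, with invented names, and is not Zapier’s actual API:

```python
# Toy trigger -> action workflow in the style of automation tools like
# Zapier: when a trigger event fires, every connected action runs on it.
# All names here are invented for illustration.

def run_workflow(event, actions):
    """Apply each automated action to the incoming event, in order."""
    results = []
    for action in actions:
        results.append(action(event))
    return results

# Example: a "new lead" event fans out to two automated actions.
new_lead = {"name": "Ada", "email": "ada@example.com"}
actions = [
    lambda e: f"Added {e['name']} to CRM",
    lambda e: f"Sent welcome email to {e['email']}",
]
print(run_workflow(new_lead, actions))
```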


Deal Dive: AI relationship coach Amorai offers more questions than answers

By Rebecca Szkutak

Building and maintaining relationships is hard, and COVID-19 definitely didn’t help. Multiple studies have shown that adults have gotten even more lonely since the start of the pandemic.

Founders are trying to find tech solutions. There are many startups looking to combat loneliness — some formed years before the pandemic — including senior-focused ElliQ and Replika, which creates an AI companion, and Inflection AI’s Pi, an emotional support bot. But a newer entrant really caught my eye this week: Amorai.

The startup has built an AI relationship coach to help people grow and foster real-life connections by offering advice and answers to relationship questions. The company was founded by former Tinder CEO Renate Nyborg and was incubated in Andrew Ng’s AI Fund. The company just raised an undisclosed amount of pre-seed funding in only 24 hours, Nyborg told Vox’s Recode Media podcast back in April.

While combating loneliness is a great mission — and some groups of people may be more open to chatting with a bot than with a human — this feels like it has the potential to go so wrong so fast. But what do I know? So I pinged an expert.

Turns out I’m not the only one a little wary of this concept. Maarten Sap, a professor at Carnegie Mellon University and researcher at the nonprofit Allen Institute for AI, shared my concern. Sap’s research focuses on building social commonsense and social intelligence into AI. He has also researched deep learning language models that help us understand human cognition. Essentially, he knows a thing or two about how AI interacts with humans.

Sap told me that while the idea of creating a tech solution to help foster real-life relationships is admirable — and there is definitely proof that there will be solid use cases for AI in combating these types of issues — this one gives him pause.

“I’m saying this with an open mind, I don’t think it will work,” he said. “Have they done the studies that show how this will work? Does [Amorai] increase [users’] social skills? Because yeah, I don’t know to what extent these things transfer over.”

The biggest thing that gives him pause, he said, is the worry that this type of application will either give all of its users the same advice, good or bad, and that it would be hard for AI to get the nuances right about certain relationships. Also, would people trust advice from AI over another person anyway?

“The idea of the pickup artists sort of came to mind,” Sap said. “Is this going to give you advice to tell a bunch of straight men to nag women or try to sleep with them? Or are there guardrails for this?”

If the model is designed to learn off of itself, it could create an echo chamber based on the types of questions people are asking. That, in turn, could push the model in a problematic direction if left unchecked. Bing users might have already learned this the hard way when its AI told people they were unhappy in their marriages.

Sap said that one way this could definitely work would be if there were a human touch to this. Human oversight to ensure that the app is giving the right advice to the right people could make this a powerful tool. But we don’t know if that is the case because the company isn’t answering questions or accepting interviews.

This round also highlights how deep the FOMO in AI really is. Someone who researches this stuff every day can’t see how this company could really work, and yet Amorai raised funding in 24 hours pre-launch in a bad market.

Of course, investors know more about the company than what is released, and sure, these concerns can serve as feedback for the startup. But as with a lot of AI startups, I have to assume it’s building with good intentions, despite having nothing concrete to prove it.

I also don’t believe this was a small pre-seed round — something I usually assume when a company doesn’t disclose the amount of funding; if it was big, you’d want people to know — but in this case, I think it’s likely the opposite. It’s a lot of pressure to raise a lot of money before executing or finding product-market fit.

“When I hear about these kinds of ideas and startups, it comes from a good place, but it often is just the tech solutionist mindset,” Sap said.

Google told me Bard has gotten better. But even the AI doesn’t believe it


One of the biggest announcements of Google I/O was the unveiling of PaLM 2, Google's latest and most advanced large language model (LLM). With the transition to PaLM 2, Google claimed Bard would have enhanced logic, math, and coding capabilities.

Today, it was time to put these abilities to the test.


Since Google Bard no longer has a waitlist, to get access, all you have to do is visit Google Bard's page and sign into your personal Gmail account.

Once I did that, I was ready to start asking away.

The first noticeable change was that Bard now had a dark mode option, one of the features Google revealed at I/O to help reduce eye strain when using the chatbot. This was a positive sign.


Next up, it was time to confirm the chatbot's biggest upgrade, PaLM 2. To verify the upgrades for myself, I asked the chatbot, "What LLM are you running under?" The response was surprising.

Despite Google's announcement at Google I/O, Bard told me it was still running on LaMDA.

I am running under the Google AI LaMDA language model. LaMDA stands for "Language Model for Dialogue Applications".

When I followed up that response by asking, "I thought you were running on PaLM 2", I was met with a similar response.

I apologize for the confusion. I am still under development, and I am not yet able to run on PaLM 2.

Is the chatbot hallucinating and providing incorrect responses as it has in the past? Or could the chatbot not have been switched to PaLM 2 yet, as Google declared yesterday? I reached out to Google for some clarity, but the response left me with as many questions as I had to begin with.

"LLMs (Bard included) can hallucinate and present inaccurate information as factual," said the Google spokesperson.


Bard's inability to accurately say what LLM it is running on could simply be a hallucination. However, one of the biggest goals of using a more advanced LLM for Bard was to decrease the number of hallucinations that occurred.

Despite how advanced Google claims PaLM 2 is, so far, Bard seems to continue to be plagued by the problems of the past including limited coding abilities. Perhaps Google overestimated PaLM 2's capabilities.


Why Bloom Didn’t Blossom

Presently, we have a multitude of LLMs available, encompassing both closed and open variants. However, just a few months ago, the number of LLMs was limited, predominantly confined to corporate-owned AI labs with restricted access.

As a result, last year, a group of over 1,000 researchers from 60+ countries and 250+ institutions came together to create BLOOM, a 176-billion-parameter LLM that can generate text in 46 natural languages and dialects and 13 programming languages.

The purpose of BLOOM was to help advance research work on large language models. But ChatGPT changed the equation, bringing a paradigm shift in AI research. Despite BLOOM being available for quite some time, we have seen models such as LLaMA have a wider impact on open source. Hence, we wonder: in this fast-changing AI world, has BLOOM lost a bit of relevance?

Not a chatbot model

ChatGPT changed the whole game with its ability to converse like a human. Soon, Microsoft, which has invested billions in OpenAI, integrated ChatGPT with Bing, prompting Google to follow suit. Most recently, at this year’s I/O event, Google also announced PaLM 2, a new LLM.

BLOOM, on the other hand, is not a chatbot but a webpage/blog/article completion model. While models like GPT-3, GPT-4, or LaMDA can answer questions directly, BLOOM does not work like that.

For example, one cannot ask BLOOM the question ‘Who is the CEO of Google?’ Instead, the desired approach is to input an incomplete sentence such as ‘The CEO of Google is’, and the model will then complete the sentence with the appropriate information.
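The completion-style usage described above can be made concrete with a small helper. The template mapping below is hand-written purely for illustration and is not part of BLOOM or any library:

```python
# Completion models like BLOOM continue a prefix rather than answer a
# question, so a question must be rephrased as an incomplete statement.
# This hand-written mapping is purely illustrative.
QUESTION_TO_PREFIX = {
    "Who is the CEO of Google?": "The CEO of Google is",
    "What is the capital of France?": "The capital of France is",
}

def completion_prompt(question: str) -> str:
    """Return the completion-style prefix for a known question."""
    if question not in QUESTION_TO_PREFIX:
        raise ValueError(f"No completion template for: {question!r}")
    return QUESTION_TO_PREFIX[question]

print(completion_prompt("Who is the CEO of Google?"))  # The CEO of Google is
```

A completion model would then be fed the returned prefix and asked to generate the next tokens, which is exactly the workflow the paragraph describes.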

In a field that has been heavily influenced by chatbots, a model like BLOOM may have experienced a diminished relevance.

Hardware Capacity

When BLOOM was released, BLOOM training co-lead Teven Le Scao said, “BLOOM is a demonstration that the most powerful AI models can be trained and released by the broader research community with accountability and in an actual open way, in contrast to the typical secrecy of industrial AI research labs.”

However, even though BLOOM is accessible to all researchers, running it requires significant hardware capacity, which very few research labs or researchers in the world have access to. In comparison, the GPT-3 API provided by OpenAI is better suited for product development, offering a more accessible option.

Furthermore, in the present day, there is a wider availability of open-source LLMs that do not demand substantial hardware capacity to operate. This accessibility to open-source models has democratised the utilisation of LLMs.

Did LLaMA steal the limelight?

With the release of LLaMA, we have witnessed the true power of open source. Since the weights of the model developed by Meta were leaked online, developers have discovered methods to reduce its memory requirements.

Soon came Alpaca, which lowered fine-tuning and inference costs to only USD 600. This cost is notably lower than the multimillion-dollar investments typically associated with training such models.

In short, LLaMA and Alpaca have achieved the objectives that were initially envisioned for BLOOM. Alpaca, in particular, facilitated the democratisation of LLMs by making LLaMA accessible to a wider audience.

Through a reduction in fine-tuning costs to a few hundred dollars and by open-sourcing the model, Alpaca empowered developers worldwide to enhance the functionality of this LLM and utilise its power in their projects.

BLOOM is multilingual

Another major reason we have not seen wider adoption of BLOOM might be the fact that it is multilingual. It was the first ever multilingual large language model, meaning only about 30% of its training data was in English.

However, most of the popular benchmarks that exist for evaluation are based specifically on the English language. Further, most text models out there are also trained in English, which makes evaluating multilingual models like BLOOM a difficult task.

Reinforcement learning with human feedback

ChatGPT was not the first LLM chatbot to emerge either. Prior to its release, Meta introduced Galactica. However, the model was taken down within two days due to its tendency to produce nonsensical and absurd outputs.

But what OpenAI did differently with ChatGPT was reinforcement learning with human feedback (RLHF). This technique allows AI models to learn from human feedback, making them more accurate and effective in a variety of tasks.
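Production RLHF trains a reward model on human preference rankings and then optimises the policy with an algorithm such as PPO. As a deliberately simplified toy, with every name below invented for illustration, the sketch nudges per-response weights up or down based on human thumbs-up/thumbs-down feedback:

```python
# Toy illustration of learning from human feedback. This is NOT full RLHF,
# which trains a separate reward model and optimises the policy with PPO;
# here the "policy" is just a weight per candidate response.

def update_weights(weights, response, feedback, lr=0.5):
    """feedback is +1 (human approved) or -1 (human rejected)."""
    updated = dict(weights)
    updated[response] = max(0.01, updated[response] + lr * feedback)
    return updated

weights = {"helpful answer": 1.0, "nonsense answer": 1.0}
for _ in range(3):
    weights = update_weights(weights, "helpful answer", +1)
    weights = update_weights(weights, "nonsense answer", -1)

# After feedback, the approved response dominates.
print(max(weights, key=weights.get))  # helpful answer
```

The real technique replaces these hand-tuned weights with gradient updates to the model itself, but the feedback loop, where human judgments steer which outputs become more likely, is the same idea.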

However, BLOOM was not trained with human feedback. In terms of accuracy, BLOOM also lags behind other closed source models in the Stanford University’s HELM benchmark. The model also performs poorly on toxicity metrics and moderately on biases.

Most importantly, Stability AI recently released the world’s first open-source LLM trained with RLHF. Called StableVicuna, the model has 13 billion parameters and can perform a variety of natural language processing tasks, including question answering, language translation, and text completion, among others.

Conclusion

Nonetheless, BLOOM has not become completely obsolete either. The model has many use cases, especially for languages other than English. It can also serve as a great translation model because of its rich multilingual corpus.

Besides, BLOOM was a research project and was never developed with the intention of being commercially exploited or used by the general public.


The Silence of the Looms 

In the world of fashion, AI has been making its presence felt, with designers leveraging it to create stunning outfits that push the boundaries of creativity. One such example is celebrity fashion designer Gaurav Gupta’s AI-based LED-embedded saree, which was created using IBM’s Watson, the precursor to modern NLP models.

This saree was worn by actor Archie Panjabi to a Vogue event, and since then, several designers have worked to create similar outfits. However, there is a looming concern that the use of AI may also lead to a decrease in individuality and creativity in fashion.

While AI was initially used to anticipate market trends and analyse customer preferences more accurately, its current capabilities are also encroaching on the roles of designers. One question remains: what will happen to the traditional weavers and artisans?

Traditional Weaving at Stake

While the convergence of AI and fashion has brought about exciting new possibilities, it also poses a potential threat to the indigenous weavers whose livelihoods depend on the craft. Although there has been much discussion about how AI models cannot replace the unique craftsmanship of traditional weavers and artisans, what are the alternatives to displacing these highly skilled yet poorly paid people?

The number of handloom weavers in India has already declined drastically over the years, from 43.31 lakh in 1995-96 to 26 lakh in 2019-2020, a staggering 40% decrease. These weavers, who create some of the world’s most exquisite fabrics and designs, earn an average salary of only Rs 10,000 – Rs 11,000 per month, while the companies building AI reap the biggest profits.

The looming threat of AI in the textile industry cannot be ignored. Automating weaving tasks such as fabric cutting, pattern designing, and loom control could result in a significant reduction in the number of weavers needed, potentially leading to widespread job loss.

Additionally, the ability of AI to analyse large amounts of data and predict trends may result in homogenous fashion designs that lack the unique touches and personal flair of human creativity, just for the sake of cost reduction.

With the use of AI technology to create intricate and reproducible designs, the future of the hand-woven textile industry looks uncertain, with the very existence of this art form hanging in the balance. With the advancements in these generative technologies, it becomes essential to find alternate solutions that do not displace highly skilled and vulnerable artisans.

AI is Already Impacting Other Sectors

AI has already had a significant effect on various occupations. According to a Goldman Sachs report, about 18% of work worldwide may be automated by AI, affecting over 300 million jobs. This was witnessed when IBM disclosed that it would pause hiring for roughly 7,800 roles whose work could be performed by AI.

It is clear that as AI continues to automate jobs, it is not only causing concerns about job displacement but also widening the economic disparity between those who create and those who benefit from the technology.

‘Godfather of AI’ Geoffrey Hinton recently left Google, citing the dangers of AI and apprehensions about AI killing jobs. Even the Writers Guild of America (WGA) is protesting against the use of AI in scriptwriting.


The intersection of AI in fashion may seem like a technological advancement for some, but in a country like India, where poverty is rampant and 60% of the population lives on less than $3.10 a day, it could have devastating consequences. It is crucial to strike a balance between innovation and preservation of traditional art forms while supporting vulnerable communities.

Sustainability, but at What Cost?

According to McKinsey’s analysis, the apparel, fashion, and luxury industries can expect a boost in operating profits of $150 billion to $275 billion over the next three to five years, thanks to the implementation of generative AI technology.

The fashion industry already accounts for 10% of the world’s annual carbon emissions, more than international flights and maritime shipping combined. Its greenhouse gas emissions are projected to grow by more than 50% by 2030, and possibly even more once computation-heavy AI is added to the mix.

Rather than putting all the blame on AI, it is worth noting that the main culprit is fast fashion. Clothes have become disposable items rather than durable products. In 2000, around 50 billion new garments were produced; in just 20 years, that number has doubled to 100 billion. Today, people buy 60% more clothes than they did at the turn of the century, but keep them only half as long.

However, sustainable fashion is expensive because adopting sustainable practices requires significant effort and cost. The industry is unsustainable owing to its use of harmful chemicals and synthetic fibres and the waste it generates. Sustainable clothing demands responsible sourcing, production, and consumer education, and these practices raise costs, making sustainable clothing a favourite of the rich.

When generative AI enters the picture, it can predict trends, create 3D models of clothing, and reduce the carbon footprint, making fashion a little more sustainable. But it also drives up prices, leaving such clothing affordable only to the rich. Gucci, Lacoste, ASOS, and Prada have been experimenting with AI for a while now.

But the share of the population that can opt for luxury goods is still small, which intensifies the already dire economic situation and risks even greater inequality and suffering.

The post The Silence of the Looms appeared first on Analytics India Magazine.

Allen Institute for AI Unveils AI2 OLMo, An Open Source Language Model

Allen Institute for AI (abbreviated as AI2) recently announced AI2 OLMo (Open Language Model), an open, state-of-the-art generative language model comparable in scale, at 70 billion parameters, to other state-of-the-art large language models.

AI2 OLMo, expected to be released in early 2024, is a platform that will allow AI researchers to work directly on language models for the first time. The project will make all of its elements accessible, including the training data and the code used to generate it. Furthermore, the model, training code, training curves, model weights, and evaluation benchmarks will all be released as open source.

“With the scientific community in mind, OLMo will be purpose-built to advance the science of language models,” says Hannaneh Hajishirzi, an OLMo project lead and a Senior Director of NLP Research at AI2. “OLMo will be the first language model specifically designed for scientific understanding and discovery.”

The purpose of this truly open language model is to benefit the research community by providing access and education on all aspects of model creation. AI2 is developing OLMo in collaboration with AMD and CSC, using the new GPU portion of the all-AMD processor-powered LUMI pre-exascale supercomputer, one of the greenest supercomputers in the world.

Apart from collaborating on hardware and computing resources with AMD and LUMI, AI2 is also partnering with organisations such as Surge AI and MosaicML for data and training code. The OLMo model and API are touted to be a powerful new resource for the broader community to participate in the generative AI revolution. Additionally, AI2 has opened the gates to more such partnerships with organisations building responsible and beneficial AI technologies.

Founded by the late Microsoft co-founder Paul Allen, the Seattle-based institute has over 200 AI researchers, engineers, professors, and staff, and benefits from annual funding of $100 million. It has published over 1,000 research papers in areas of artificial intelligence such as natural language processing, computer vision, and common-sense reasoning.

Recently, the institute also raised a fresh $30 million pre-seed fund to invest in “AI-first” startups born from its incubator program. The fund is backed by venture capital heavyweights including Madrona Venture Group, Sequoia Capital, and Two Sigma Ventures, which all previously invested in the first fund.


How IBM’s Watson X is Scripting the End of Business As We Know it

IBM has joined the generative AI market with its suite of AI tools designed for enterprises. Banking on its strength as an enterprise-focused company, and with many organisations already using its Watson chatbot, the company is looking to revive its former glory with this next-generation technology.

Known as Watsonx, the platform enables enterprises to design and customise large language models (LLMs) to fit their operational and business needs. Watsonx comes with a suite of tools for tuning LLMs, a data store built on a lakehouse architecture, and an AI governance toolkit. IBM foresees the platform being used for conversing with customers and employees, streamlining business workflows, automating IT processes, enhancing security, and meeting sustainability objectives.

An example of Watsonx foundation models fused into software products is Watson Code Assistant. Currently, the tool is focused on increasing developer productivity for IT automation, but IBM plans to expand it to other domains such as content discovery, code optimisation, and code explanation.

Personalisation is the way

When Watson first arrived, it was supposed to change everything. But over the course of a decade, IBM’s artificial intelligence couldn’t achieve its grand vision to remake industries or generate riches for companies. As it turns out, the problems Watson hoped to solve, from tackling cancer to climate change, were a lot harder than anticipated, The New York Times reported.

It seems like this time around IBM has resisted the urge to make and invest in overarching claims, and instead focused on more narrow use cases. “Watson Code Assistant can run on a private vs. public instance, and retraining doesn’t require sharing code with IBM. That’s the paradigm most businesses are waiting for,” said Vineet Vashishta, Founder & CDO at V Squared.

Organisations seek to retrain their generative AI code assistants on their company’s repositories, resulting in personalised and accurate code that meets their team’s specific needs. Vashishta explains that this is precisely why most businesses would not sign up for code assistants using GPT technology today, as exposing proprietary code is a non-starter.

In this regard, personalisation will be the driving force behind most generative AI applications. “IBM is the first company to address that, and it won’t be the last,” he says. But, as it reaches scale, Vashishta makes an interesting observation: “Over the next year, the costs of personalising models and running private instances will drop. Generative models will go from customised to the business to customised to individuals. That’s when we’ll see some truly innovative solutions.”

IBM versus the world

We have already seen similar developments from other enterprise-focused companies. For instance, in a recent blog post, Microsoft explained how ChatGPT can work with enterprise data: the Azure OpenAI Service combined with Azure Cognitive Search can help organisations index and retrieve both private and external data for ChatGPT. Alongside ChatGPT, the Azure OpenAI Service already offers products like the code-generating Codex and the image-generating DALL-E 2.

At the recent GTC event, we also saw the unveiling of the NVIDIA NeMo service, which will help enterprises combine LLMs with their proprietary data. Likewise, Amazon Bedrock is a newly launched service by AWS that offers various foundational models to enable businesses to develop and customise their own generative AI applications.

“Among the Watson X announcements, AI studio and data store won’t move the needle much. They are catching up, if that, with other vendors who have been offering better AI data stores and data lakes for many years,” Andy Thurai, principal analyst at Constellation Research, told InfoWorld.

IBM’s direct competitors, including SAP, Oracle, and Accenture, are all accelerating their efforts to bring generative AI solutions to their clients. As with any hot technology, the generative AI cloud market is becoming cluttered. At the same time, a new trend is emerging, similar to what has been observed in the chip industry, known as “coopetition”, under which competitors collaborate in several areas to offer solutions. In this light, SAP has announced plans to embed IBM’s Watson AI technology into its applications.

This begs the question: Will IBM Watson be able to keep up with its competition or will it crumble…again?

Here, it is important to remember that IBM’s early investment in quantum computing has already placed it several years ahead of its competitors, if and when the technology delivers an advantage for enterprises.

The quantum bet

During Think2023, IBM CEO Arvind Krishna emphasised that there is much more to come in the technological landscape beyond AI and hybrid cloud, referring to the impact quantum computing will have. When this technology becomes a decisive factor in gaining a competitive edge, IBM will already be ahead of the curve, as it has a head start with a real quantum computer boasting over 400 qubits, running on the cloud.

Read: Quantum Computing Meets ChatGPT

The company has collaborated with research institutes and quantum computing firms to develop new capabilities and launch courses to upskill individuals in this domain, thus enabling the production of talent to match up to the pace of progress.

“Cloud and AI are feeding onto each other, and their combination has led us to achieve an enormous advantage. Without cloud computing, the development of AI would have been much slower. However, by integrating these two technologies with quantum computing, we are poised to reach a remarkable inflection point in this decade,” said Krishna.


Google Bard AI now generally available


Google has closed the wait list for Bard, its generative AI chatbot, after announcing the chatbot’s general availability at the Google I/O conference on Thursday. Bard appears to be a direct competitor of ChatGPT and GPT-4, the generative AI from startup OpenAI, which Microsoft uses in its Bing Chat.

How to use Google Bard

Now that the wait list has been closed, you can use Google Bard by visiting bard.google.com and registering with an existing Google account. Accounts managed by a Google Workspace admin are not eligible. Bard runs in Chrome, Safari, Firefox, Opera and Edgium.

Google Bard builds on years of research

Google Bard runs on LaMDA (Language Model for Dialogue Applications), Google’s own large language model. Google first announced in January that it was testing its generative AI in-house. Google Bard builds on work done in Google’s AI Test Kitchen, which launched in August 2022. Following that, Bard was available through a wait list starting in March. The tech giant has been working on machine learning for generating natural-sounding language since 2017.

Bard also hooks directly into Google Search. Most conversations come with a suggested search attached, which Google says users can run to “check the response.” Google also announced at the I/O conference that AI will be integrated into Search.

Google Bard is available only in U.S. English, with plans to expand to other languages. In addition, Google plans to add the ability to write code to Bard, as well as “multimodal experiences” that include video and audio.

Google wants user feedback about Bard

“Bard is experimental, and some of the responses may be inaccurate,” Google pointed out. The team at Google even poked fun at some of Bard’s suggestions in its own announcement post.

Google also emphasized privacy. The version of Bard that was available through the wait list couldn’t answer questions about what happened earlier in a conversation between itself and a human, for example. According to the Bard FAQ, Google purposely limited Bard’s “ability to hold context.” Google is asking for users to send feedback on Bard’s responses.

In an internal email viewed by CNBC, Google and Alphabet CEO Sundar Pichai told employees, “Things will go wrong. But the user feedback is critical to improving the product and the underlying technology.”

Google is competing with Microsoft and OpenAI

Google Bard’s big name competitor is OpenAI’s ChatGPT. So far, ChatGPT has the advantage of already being able to write basic code, expanding its enterprise applications significantly.

Microsoft’s Bing search uses OpenAI’s GPT-4, the large multimodal model on which ChatGPT is built. ChatGPT is currently available with a subscription to OpenAI’s ChatGPT Plus.

Startup Anthropic, which has gained funding from Google, has its own AI assistant, Claude. Anthropic is playing in the search engine space with a partnership with DuckDuckGo while using Google as a cloud provider.

What tech leaders should consider before using AI

Chat assistants built with large language models have faced criticism for returning answers that read well but are factually incorrect. There are also concerns that the job training AI requires may outweigh its productivity benefits, or that AI-generated content may replace human-made work and thereby limit the number of available jobs. Business leaders should consider whether generative AI is appropriate for their workplace culture as well as whether it improves productivity.
