HuggingChat Python API: Your No-Cost Alternative


You've seen so many alternatives to ChatGPT of late, but have you checked out HuggingChat from HuggingFace?

HuggingChat is a free and open source alternative to commercial chat offerings such as ChatGPT. In theory, the service could leverage numerous models, yet I have only seen it use LLaMa 30B SFT 6 (oasst-sft-6-llama-30b) from OpenAssistant thus far.

You can find out all about OpenAssistant's interesting efforts to build their chatbot here. While the model may not be GPT4 level, it is definitely a capable LLM with an interesting training story that is worth checking out.

Free and open source? Sounds great. But wait… there's more!

Can't get access to the GPT-4 API? Sick of paying for it even if you can? Why not give the unofficial HuggingChat Python API a try?

No API keys. No signup. No nothin'! Just pip install hugchat, then copy, paste, and run the sample script below from the command line.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from hugchat import hugchat

# Create a chatbot connection
chatbot = hugchat.ChatBot()

# Start a new conversation and switch to it
conv_id = chatbot.new_conversation()
chatbot.change_conversation(conv_id)

# Intro message
print("[[ Welcome to ChatPAL. Let's talk! ]]")
print("'q' or 'quit' to exit")
print("'c' or 'change' to change conversation")
print("'n' or 'new' to start a new conversation")

while True:
    user_input = input('> ')
    if user_input.lower() == '':
        pass
    elif user_input.lower() in ['q', 'quit']:
        break
    elif user_input.lower() in ['c', 'change']:
        print('Choose a conversation to switch to:')
        print(chatbot.get_conversation_list())
    elif user_input.lower() in ['n', 'new']:
        print('Clean slate!')
        conv_id = chatbot.new_conversation()
        chatbot.change_conversation(conv_id)
    else:
        print(chatbot.chat(user_input))

Run the script (./huggingchat.py, or whatever you named the file) and start chatting.

The barebones sample script takes input and passes it to the API, displaying the results as they are returned. The only interpretation of input by the script is to look for a keyword to quit, a keyword to start a new conversation, or a keyword to change to a pre-existing alternative conversation that you already have underway. All are self-explanatory.

For more information on the library, including the chat() function parameters, check out its GitHub repo.

There are all sorts of interesting use cases for a chatbot API, especially one that you are free to explore without a hit to your wallet. You are only limited by your imagination.

Happy coding!

Matthew Mayo (@mattmayo13) is a Data Scientist and the Editor-in-Chief of KDnuggets, the seminal online Data Science and Machine Learning resource. His interests lie in natural language processing, algorithm design and optimization, unsupervised learning, neural networks, and automated approaches to machine learning. Matthew holds a Master's degree in computer science and a graduate diploma in data mining. He can be reached at editor1 at kdnuggets[dot]com.


This New Programming Language Likely to Replace Python


Modular AI, an AI infrastructure company, recently unveiled Mojo, a new programming language that combines the syntax of Python with the portability and speed of C, making it ideal for both research and production.

Besides this, in the Product Launch 2023 keynote, Tim Davis and Chris Lattner (of LLVM and Swift fame) also released the Modular Platform, described as one of the fastest unified inference engines.

Mojo🔥 combines the usability of Python with the performance of C, unlocking unparalleled programmability of AI hardware and extensibility of AI models.
Also, it's up to 35000x faster than Python 🤯 and … deploys 🏎 pic.twitter.com/tjT09U4F80

— Modular (@Modular_AI) May 2, 2023

The creators of Mojo say they had no intention of creating a new programming language. “But as we were building our platform with the intent to unify the world’s ML/AI infrastructure, we realised that programming across the entire stack was too complicated,” reads the blog.

That meant building a programming language with powerful compile-time metaprogramming, integration of adaptive compilation techniques, caching throughout the compilation flow, and other features not supported by existing languages. That is the direction Mojo is heading. The team claims it is up to 35,000x faster than Python.
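The overhead Mojo targets is easy to observe from plain Python itself: a loop run step by step by the bytecode interpreter is much slower than the same computation executed inside CPython's C-implemented built-ins. A small benchmark sketch (this illustrates interpreter overhead only, not Mojo's own numbers):

```python
import timeit

N = 1_000_000

def interpreted_loop():
    # Each iteration is dispatched by the bytecode interpreter
    total = 0
    for i in range(N):
        total += i
    return total

def c_builtin():
    # sum() consumes the range iterator in C, avoiding per-step dispatch
    return sum(range(N))

t_loop = timeit.timeit(interpreted_loop, number=5)
t_builtin = timeit.timeit(c_builtin, number=5)

print(interpreted_loop() == c_builtin())  # same result
print(t_loop > t_builtin)                 # interpreted loop is slower
```

Compiled languages, and by extension Mojo's pitch, push this idea much further by removing the interpreter from the hot path entirely.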

New Programming Language, Really?

It looks like Julia, which was touted as the Python replacement for its scalability and has been one of the most embraced programming languages of the last few years (competing with Rust), finally has another competitor.

Moreover, according to Mojo's documentation, instead of starting from scratch, the language will leverage the entire ecosystem of Python libraries while being built on a brand-new codebase. This, combined with computational performance approaching that of C and C++, should let AI-focused Python developers rely on Mojo instead of falling back on C or C++.

One of the developers' major motivations for building the new language was that most modern programming systems rely on accelerators like GPUs for core operations and only “fall back” on main CPUs for supporting work such as data loading, pre- and post-processing, and integration with foreign systems written in other languages. The company wanted to support this full gamut in one language.

Moreover, rather than invent a new syntax and build a new community from scratch, the company decided to go with Python and its ecosystem. A very smart move indeed!

Mojo is also intended to become a superset of Python 3, and Modular has indicated plans to open-source the language.

Competition Galore

According to the Stack Overflow Developer Survey 2022, Rust is the most loved programming language, and has been for seven consecutive years. Rust's problem is its complex syntax, which makes for a steep learning curve. Even so, Rust is used by Meta and Dropbox, with Google planning to adopt it as well.

In the same survey, Julia ranked in the top five most loved languages, beating Python; the same was true the year before. Viral Shah, co-creator of Julia, said in a decade-old interview with AIM, “We wanted a language that did not trade-off performance for productivity and instead provided both high performance and high productivity.”

Interestingly, Elon Musk recently tweeted that AGI will not be built on Python but on Rust, after saying last year that he is a fan of Rust. In the thread, some users replied that they are on the side of Chris Lattner and hope it will be Swift, one of Lattner's earlier creations. To this, Modular asked, “What if it’s the best of all of them?”

What if its the best of all of them? 😏 Tune in May 2nd at 9am to find out on https://t.co/bhbmGy7hYb 🔥 https://t.co/vXetLqqKQs pic.twitter.com/S15DOjA1aH

— Modular (@Modular_AI) April 22, 2023

Addressing questions on Hacker News comparing Mojo with Julia and Rust, and about future plans to compete with Python, Chris Lattner, one of the co-creators, praised Julia as a “wonderful language and a wonderful community,” calling himself a super fan. On the differences between Julia and Mojo, he stressed that Mojo incorporates a number of technical advances over languages like Swift, Rust, C++, and even Julia, because it has learned from and built on them.

He further added that there is definitely space in the AI/ML landscape for another language that makes it easier to deploy and scale models while also supporting the full Python ecosystem, saying, “Julia is far more mature and advanced in many ways.” As one Twitter user pointed out, it is interesting how Lattner looks at a problem and decides to make a whole new programming language.

Though the developers have been humble about their approach with respect to Python, the communities on Hacker News and Twitter are busy comparing Mojo with Python.

A Game Changer?

Neither Python nor Julia is a preferred language for systems programming; they are used mostly for building AI models. Python works around that limitation with low-level bindings to C and C++ for building libraries, but writing these hybrid libraries is a complicated task that also requires knowledge of C and C++. This is where Mojo comes in, offering a single integrated, backwards-compatible, Python-like language: a “Pythonic C”.
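The C-binding workflow described above can be seen in miniature with Python's standard ctypes module, which loads a C library and calls into it directly. A minimal sketch; the fallback library name assumes a glibc-based Linux system:

```python
import ctypes
import ctypes.util

# Locate the C math library; fall back to the common glibc name
libm_name = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libm_name)

# Declare the C signature of cos() so ctypes converts values correctly
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0
```

Mojo's pitch is to make this kind of C-level speed available without ever leaving the Python-like language, so no signature declarations or separate C toolchain are needed.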

On the other hand, whenever a new technology arrives, there are sceptics and naysayers, who sometimes bring up interesting points. While some people on the Hacker News forum argue that Mojo might be a Python replacement, others remain sceptical about the performance improvements the language's creators promise. Some don't even call it ‘Pythony’, a characterisation the developers have made efforts to stay away from.

Python seems like it’s becoming the language of the future for its ease of use, utility across many distinct domains, and support through various large language models.
The duct tape of programming languages.
Wouldn’t have expected it. https://t.co/Wmf7gKCNHr

— Andrew Ruiz (@then_there_was) May 2, 2023

Another person on the forum calls Mojo the final nail in the coffin for “Julia as a replacement for Python”. Maybe Julia has missed out on its window of opportunity to replace Python, and Mojo is here to do the job. Still, the arena of programming languages remains unpredictable.

Moreover, this might be just another Julia moment in the world of programming, with Python syntax. Anyway, OpenAI is on a somewhat similar mission with Triton, its own programming language.

The post This New Programming Language Likely to Replace Python appeared first on Analytics India Magazine.

Breaking the Silence: LLMs Decode Enigmatic World of Animal Language

“In the realm of enchanted LLMs, GPT-4’s magic recently saved a dog’s life by detecting a disease through symptoms. Now, scientists blend animal wisdom with LaMDA’s secrets, embracing GPT-3’s essence. Within the Earth Species Project, they pursue the holy grail: the decoding of animal tongues. Past knowledge weaves a tapestry, birthing a melodic ‘Rosetta Stone,’ revealing creatures’ secret symphonies.”

The shape of the problem

While it seems natural to allow NLP algorithms to find the structure of animal communication, the problem has many more facets. One of the most basic pitfalls is that noises are simply just one of the modalities that animals use to communicate. Visual and tactile stimuli play an equally important role in animal communication as auditory stimuli.

Human communication is built on many social norms and structures, which serve as a reference point when decoding ancient languages. Animal communication has no such reference point, as each species has its own syntax of communication.

Collecting animal data also comes with its own set of ethical problems. Research has found that even if experiments aren’t constructed to be manipulative and are purely observational, scientists still intervene to monitor. This may cause distress to the animals they are studying, poisoning the data and raising ethical concerns that scientists are now being required to evaluate.

Apart from decoding the communication itself, making sense of animal data is also a behemoth undertaking. As opposed to human data, which is annotated in an intuitive way (for humans), animal data is not only difficult to procure, but also needs specific research to annotate.

These are just some of the issues that researchers need to face when looking at animal communication. In fact, scientists have been trying to decode animal communication since the late 1950s, resulting in a comprehensive corpus of research on the topic.

Building on this legacy, the Earth Species Project (ESP) aims to solve some of the field’s long-standing issues through the use of AI. From mapping out the vocals of crows to building a benchmark of animal sounds, ESP is laying the foundation for future AI research in the field. Towards this end, the project also has a roadmap detailing how they wish to use AI to shorten the communication gap between species.

Solution: SSL

According to ESP, there are four main facets to the communication problem — data, fundamentals, decode, and communicate. The project aims to bring AI solutions to each of these sub-problems. For data, it is building self-supervised models to annotate and interpret collected data, among other undertakings. On the fundamentals side, it is focused on foundational models, such as AVES (animal vocalisation encoder based on self-supervision).

For decode, ESP is building out self-supervised pattern detection models. Finally, for communication, it is creating a generative AI solution with the help of Google to generate the communicative signals of animals. To this end, one of their most interesting projects currently is the generative vocalisation experiment, which adds to researchers’ arsenal of bioacoustics research tools. Using this approach, researchers can play back edited vocalizations, thus deepening their understanding of animal communication.

Using AI, ESP was able to turn semantic relationships into geometric relationships — the same process that was used to make NLP algorithms more efficient in the past. By visualising the relationships between various words, it becomes possible to create a shape for a given language. This has been successfully implemented for human languages, but ESP believes they can go further. Aza Raskin, one of the co-founders of the Earth Species project, stated,

“We’re building on top of the research in the human domain, we’re developing models that can work with bats and whales. With humans you almost always know you can ground truth at some point, you can always check. That’s not the case with animals.”

However, it is possible for certain self-supervised algorithms to approach this problem from a different angle. By finding patterns across a large number of similar datasets, the latent space created by LLMs can provide important insights into animal communication. But there is a risk of solving the problem of communicating with animals without knowing what we are communicating to them.

The choice of self-supervised learning (SSL) to solve this problem is also an interesting one. SSL algorithms are arguably the least explainable form of AI, but they might just be the best fit for understanding animal vocalisations. Using the fundamentals set in place today, ESP believes it can “build an audio chatbot for animal communication that nobody yet understands.”
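The “semantic relationships into geometric relationships” idea mentioned above can be sketched with a toy co-occurrence model. Everything here — the corpus, the words, the window size — is invented purely for illustration; real systems learn dense embeddings from massive data:

```python
from collections import Counter
from math import sqrt

# Tiny invented corpus; real pipelines use millions of recordings/sentences
corpus = [
    "the whale swims in the deep ocean",
    "the orca swims in the deep ocean",
    "the dog runs in the green yard",
]

def cooccurrence_vector(word, sentences, window=2):
    """Count words appearing within `window` positions of `word`."""
    counts = Counter()
    for s in sentences:
        tokens = s.split()
        for i, t in enumerate(tokens):
            if t == word:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[tokens[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

whale = cooccurrence_vector("whale", corpus)
orca = cooccurrence_vector("orca", corpus)
dog = cooccurrence_vector("dog", corpus)

# Words used in similar contexts end up geometrically close
print(cosine(whale, orca) > cosine(whale, dog))  # True
```

Once every token has a vector like this, “the shape of a language” is just the geometry of those vectors — which is the structure ESP hopes to compare across species.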

The post Breaking the Silence: LLMs Decode Enigmatic World of Animal Language appeared first on Analytics India Magazine.

Stability AI’s Unstable Behaviour

Since the launch of Stable Diffusion, Stability AI has emerged as an open-source champion. The startup gained prominence when it open sourced its AI model, allowing anyone to build on the company’s code or even contribute to it.

Stability AI disrupted the generative AI scene with Stable Diffusion, which was a breakthrough in open-source text-to-image models. Prior to this, such models were proprietary. The startup’s innovation put it in direct competition with Silicon Valley’s heavyweights, including OpenAI.

Riding on the success of Stable Diffusion, the company also raised $101 million in the following months. However, since then, the company has been plagued by a series of predicaments.

Very recently, Sifted, a European publication backed by the Financial Times, claimed that Stability AI neither developed the original Stable Diffusion code nor owns the intellectual property (IP) to the model.

Stable Diffusion was, in fact, a German research project that originated at the University of Heidelberg (the group is now in Munich). The code for the text-to-image model was released by Ludwig Maximilian University of Munich in 2021.

The report alleges that the startup might have withheld key information to its investors, including the fact that they did not own the IP rights to Stable Diffusion, their flagship AI model.

Nonetheless, the report does state that although Stability AI did not possess any ownership rights to the Stable Diffusion AI model, it played a significant role in enhancing its capabilities.

Interestingly, this is not the only instance of serious allegations against Stability AI. The company has already drawn criticism for facilitating the creation of objectionable content such as graphic violence and pornographic, non-consensual celebrity deepfakes.

Class action lawsuit

In January, a class action lawsuit was filed in the District Court of San Francisco against Stability AI, as well as Midjourney and DeviantArt, by a group of artists who allege that their artwork was illegally used by these companies to train their models and create new images.

“The plaintiffs and the Class seek to end this blatant and enormous infringement of their rights before their professions are eliminated by a computer programme powered entirely by their hard work,” the lawsuit says.

Similarly, Stability AI is also being sued by Getty Images over alleged copyright violation.

“It is Getty Images’ position that Stability AI unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images absent a licence to benefit Stability AI’s commercial interests and to the detriment of the content creators,” the stock photo company said in a statement.

Previously, Getty Images has prohibited the submission of content generated using Stable Diffusion due to concerns regarding potential intellectual property conflicts.

Burning cash

In early April, Semafor published a report where it claims that Stability AI is burning through cash and is struggling to generate revenue.

In a tweet, Stability AI’s chief Emad Mostaque did reveal that the training cost of Stable Diffusion was close to $600,000, as it took over 150,000 GPU hours on AWS servers. Similarly, Business Insider reported that the startup’s operations and cloud expenditure exceed $50 million.

Semafor claims Stability AI has already burned through a major chunk of the $101 million it raised last year and investors are sceptical about participating in a fresh fundraising round.

The report further claims that employees have lost faith in Mostaque’s leadership style. His preference for giving AI researchers complete independence, granting them access to expensive server time without any supervision, is seen as counterproductive by some of Stability AI’s employees.

Still on the front foot?

Despite its struggles, the company appears to be on the front foot. Recently, Stability AI released a family of large language models they called StableLM.

This model, currently in its alpha version, is available in 3 billion and 7 billion parameter versions, with the company promising the future release of 15 and 65 billion parameter versions.

Earlier this year, the startup also acquired Init ML, developer of the acclaimed AI-powered imaging tool Clipdrop, for an undisclosed amount. As per the deal, Stability AI will integrate its generative AI into the Clipdrop platform, which already boasts 15 million users, making it accessible to creators.

Now, it appears that the company is gearing up for a fresh round of funding, which is expected to exceed the $101 million it raised last year.

Furthermore, Mostaque has also hinted that he could possibly take his company public in the next few years. Nonetheless, he also did acknowledge that in order to do so, a company needs amazing revenue, amazing margins, and distribution.

While Mostaque is dreaming big, with Stability AI facing significant revenue and margin challenges, as well as lawsuits and allegations, the path ahead appears gloomy for the company.

The post Stability AI’s Unstable Behaviour appeared first on Analytics India Magazine.

Rakuten India Announces the 2nd Edition of RPC 2023 – Generative AI & Future of Cloud


Rakuten India, the Bangalore-based technology hub for Rakuten Group, Inc, has announced the Rakuten Product Conference (RPC) 2023 in association with Analytics India Magazine. To be held virtually, RPC is a marquee product conference of Rakuten India for data scientists, AI evangelists and innovators across the globe with the ‘Generative AI and Future of Cloud’ themes. The conference will highlight the role of applied artificial intelligence in solving business problems and real-world challenges.

The massive disruptions that AI has gone through in the Generative AI realm have impacted businesses deeply. The Rakuten Product Conference will have the who’s who from the industry, host speaker sessions and panel discussions on the recent developments in generative AI and the key cloud strategies in companies.

The two-day virtual conference, scheduled for May 31 and June 1, will see participation from leading data science professionals, AI practitioners, and decision-makers driving AI and cloud projects across organisations. Additionally, the virtual conference will take a deep dive into the latest business use cases and implementation opportunities for generative AI, and showcase the challenges of certain domains.

Rakuten Product Conference (Theme: Generative AI & Future of Cloud) is expected to line up several notable personalities from the industry. The conference anticipates the participation of 2500+ attendees from around the world, who would gain the know-how of tackling day-to-day AI challenges and ways to foster innovation in a dynamic world.

REGISTER NOW

Key Highlights of the Rakuten Product Conference 2023:

  • Talks by leading experts from the industry for both tracks.
  • Learn how businesses can transform themselves using Generative AI.
  • Network with data artists and game-changing researchers who drive AI strategies in their organisations.
  • Experts from some of the leading organisations give you a sneak peek into the future of the cloud and their plans of action.
  • Exclusive opportunity to connect and build long-lasting relationships with prominent industry people.

Who Should Attend:

CIOs, CTOs, AI experts, innovation heads, data scientists, teams working on analytics & business intelligence and anyone interested in the AI and analytics domain can gain valuable insights from this two-day virtual generative AI and Future of Cloud conference.

Registration for the Rakuten Product Conference (Theme: Generative AI & Future of Cloud) is open for all.

Date: May 31 & June 1 (Online)

Time: 9:30 AM to 2:30 PM

To know more, click here.


About Rakuten India Enterprise:

Rakuten India, the development centre and key technology hub of the Rakuten Group, Inc, enables businesses with expertise and thorough knowledge in multiple streams of technology such as mobile and web development, AI/ML, web analytics, platform development, backend engineering, data science, and much more. Its unique 24/7 support centre ensures the reliability and strength of the Rakuten Ecosystem.

With dedicated centres of excellence for mobile application development, data analytics, engineering, DevOps and information security, the company ensures the success of multiple units of Rakuten Group, Inc. With 1500+ employees and growing, Rakuten India is based in Crimson House, Bangalore.

The post Rakuten India Announces the 2nd Edition of RPC 2023 – Generative AI & Future of Cloud appeared first on Analytics India Magazine.

Gartner: Public cloud end-user spending forecast to hit $597.3B


An April forecast from Gartner projects that end-user spending on public cloud services will grow 21.7% to $597.3 billion in 2023, up from $491 billion in 2022. Emerging technologies like generative AI, Web3 and the metaverse (or virtual reality) are among the factors driving increased use of public cloud services, Gartner found.
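Taking Gartner's numbers as given, the 21.7% growth rate applied to the 2022 total lands within rounding distance of the 2023 forecast — a quick arithmetic check:

```python
base_2022 = 491.0      # billions USD, 2022 end-user spending (Gartner)
growth_rate = 0.217    # forecast growth for 2023

forecast_2023 = base_2022 * (1 + growth_rate)
print(round(forecast_2023, 1))  # 597.5, close to Gartner's $597.3B
```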

Gartner sees a smooth climb to 2026, by which time it predicts 75% of organizations will use a digital transformation model based on the cloud.

Jump to:

  • Public cloud spending growth broken down by sector
  • The main drivers of uptick in cloud spending
  • Technological developments changing buying decisions
  • The effect of generative AI on public cloud spending

Public cloud spending growth broken down by sector

Gartner broke public cloud spending down into several sectors.

  • Infrastructure-as-a-service is expected to experience the highest end-user spending growth in 2023 at a 30.9% increase from 2022.
  • Platform-as-a-service is expected to grow at 24.1%.
  • Software-as-a-service, the largest segment of the cloud market by user spending today, will continue to grow at a rate of 17.9% to $197 billion in 2023.

Sid Nag, vice president analyst at Gartner, said providers on the application layer hear that customers want to “redesign SaaS offerings for increased productivity, leveraging cloud-native capabilities, embedded AI and composability.”

Economic uncertainty may also be a factor in today’s buying decisions, but it doesn’t seem to have slowed down cloud investments in particular.

The Wasabi 2023 Global Cloud Storage Index report in January pointed out that 84% of the organizations it surveyed plan to increase their public cloud spending in 2023. Many organizations (70%) already place their storage capacity in public and dedicated clouds.

The main drivers of uptick in cloud spending

“The next phase of IaaS growth will be driven by customer experience, digital and business outcomes and the virtual-first world,” said Nag. “Emerging technologies that help businesses interact more closely and in real time with their customers, such as chatbots and digital twins, are reliant upon cloud infrastructure and platform services to meet growing demands for compute and storage power.”

SEE: How does ChatGPT work?

Digital transformation is still an ongoing concern; many organizations that have not completely made the digital switch want to start out on the cloud.

“Web3 and metaverse ultimately require a massively scalable infrastructure to run on in order to provide the end user experience for applications that leverage these technologies,” Nag told TechRepublic via email.

Tech buyers “should calibrate their cloud spend based on how extensively they plan to leverage these technologies for their applications and workloads,” he said.

Technological developments changing buying decisions

Organizations are looking for technology that can:

  • Optimize their operations or the trust they show to their customers.
  • Scale their solutions or product delivery.
  • Pioneer new audience interactions or business opportunities.

All of these features could be served by software hosted in the cloud, depending on the company’s needs.

“Focus on the move to digital. Don’t hunker down,” Nag advised. “Seize the opportunity to get a leg up on the competition by carefully increasing cloud spend in targeted areas in a prescriptive manner to drive their digital transformational initiatives – both externally from a B2B and B2C perspective, as well as modernizing … internal IT by embracing digital technologies and transformational models.”

With many tech purchases being made by business leaders outside of IT, who is doing the buying can sometimes be hard to predict. However, IT often has a hand somewhere in the process, and is therefore still a key audience for tech buyers.

The effect of generative AI on public cloud spending

Generative AI is now a major factor in companies’ tech buying decisions. It takes a lot of computing power and also faces a barrier of trust. Privacy is a major concern around generative AI, but it doesn’t serve organizations to pause if they plan to implement it, Gartner VP Analyst Avivah Litan said.

Since hyperscalers are best equipped to handle the infrastructure for large language models, they are keeping an especially close eye on developments in that space.

“Hyperscalers are positioning themselves for [large language models and foundation models] as they continue to enhance their cloud capabilities to support the needs of these AI models,” Nag said.


Raining Quantum Investments, But Talent Still an Issue

Much like the transition from central processors to graphics processors, we are currently witnessing another paradigm shift, in which quantum computing is emerging from the shell of research and development to the forefront of mainstream technology. A recently published McKinsey report sheds light on the state of quantum technology today.

As per the report, quantum technology start-ups, which include companies in the domains of quantum computing, communications, and sensing, received $2.35 billion in investment in 2022. This exceeded the previous record for annual investment in quantum technology start-ups, set in 2021. Moreover, four of the biggest deals of the 2000s closed in 2022.

However, the report also highlights that more investments are going into established startups than to new companies. Numbers show that only 19 quantum technology startups were founded in 2022 compared with 41 in 2021, bringing the total number of start-ups in the quantum technology ecosystem to 350.

One of the companies that recently raised $24M in Series A funding is Strangeworks, a software company that provides a cloud-based platform for developers, researchers, and enterprises to access and use advanced computing resources, like quantum simulators and quantum hardware.

Quantum becoming mainstream

William Hurley, Founder & CEO of Strangeworks, identifies two industry trends in providing scalable quantum solutions. The first is the development of hybrid classical-quantum computing systems, which combine the strengths of classical and quantum computing to solve problems more efficiently, which can be useful for tasks such as optimisation or machine learning.

The second trend involves the development of software tools and platforms that enable organisations to manage and scale their quantum computing resources efficiently. These tools include schedulers and optimizers for quantum computations, as well as resource and data management tools across multiple quantum computing platforms.

The case for a hybrid classical-quantum computer was also made by Timothy Costa, Director of HPC & Quantum at Nvidia, who told AIM, “while today’s QPUs are not capable of providing advantage in production applications, GPU supercomputers are time machines allowing researchers to work on future quantum systems that may accelerate critical workloads.”

At GTC 2023, Nvidia generated buzz when it announced DGX Quantum, a new system developed in collaboration with Quantum Machines. The system will utilise the newly open-sourced CUDA Quantum software.

Before you assume that a hybrid system involves mixing bits and qubits, Costa explains that this is not the case. The hybrid system operates by exposing familiar programming models, compilers, and toolchains for each type of accelerator, making it easy for domain scientists to map tasks to the processor (quantum or classical) that is best suited for the job. As a result, work is divided into discrete tasks that can be mapped to the processor of choice.

“For quantum computation, domain scientists describe tasks for the processor at a high level, and the compilation toolchain lowers this to a representation that the quantum processor can readily understand and execute,” adds Costa.
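Costa's description of task mapping can be sketched in miniature: split a workflow into discrete tasks and dispatch each to the processor best suited for it. The task names, dispatch table, and "processors" below are purely illustrative assumptions; this is not the CUDA Quantum API.

```python
# Toy illustration of the hybrid model: work is divided into
# discrete tasks, each routed to the processor (classical or
# quantum) best suited for it. All names here are hypothetical.

def run_on_cpu(task):
    # Stand-in for a classical computation.
    return f"CPU handled {task}"

def run_on_qpu(task):
    # Stand-in for a quantum computation.
    return f"QPU handled {task}"

# Dispatch table: map each task type to its best-suited processor.
DISPATCH = {
    "preprocess": run_on_cpu,
    "sample_circuit": run_on_qpu,
    "postprocess": run_on_cpu,
}

def run_workflow(tasks):
    # Each discrete task is mapped to the processor of choice.
    return [DISPATCH[t](t) for t in tasks]

results = run_workflow(["preprocess", "sample_circuit", "postprocess"])
```

In a real system, the compilation toolchain (not a Python dictionary) would lower the quantum tasks to a representation the QPU can execute, but the division of labour follows the same shape.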

Talent Shortage

Apart from the technical challenges that arise when scaling up quantum systems for practical applications, Hurley highlighted the lack of available talent with expertise in both quantum and traditional computing as one of the biggest bottlenecks to quantum adoption. While the McKinsey report provides some cause for celebration, the overall situation remains grim.

The report states that the talent gap narrowed in 2022 compared to 2021, partly due to more academic institutions integrating quantum into their curricula. According to its analysis, the remaining jobs could be filled by graduates from fields related to quantum technologies, which produce approximately 350,000 master’s-level graduates worldwide each year.

There are significant investments in programs related to critical subjects in quantum computing. “Despite the industry not experiencing significant growth, thousands of students and engineers are investing in certification courses to upskill themselves in areas related to quantum technology,” L Venkata Subramaniam, IBM Quantum India Leader, told AIM.

The transition to quantum computing is expected to create four central categories of jobs: hardware (building quantum computers), middleware (interconnecting hardware and software), research (developing algorithms that can run on today’s quantum computers), and data science or application development (coding on top of the application layer).

However, Subramaniam believes that for those interested in hardware, the path is more challenging. According to him, picking up these concepts from online self-learning is difficult, and university courses are limited.

The post Raining Quantum Investments, But Talent Still an Issue appeared first on Analytics India Magazine.

ChatGPT Dominates as Top In-Demand Workplace Skill: Udemy Report

April 29, 2023, by Jaime Hampton

Online learning platform Udemy released its Q1 Workplace Learning Index that found ChatGPT is currently the most in-demand workplace skill in the U.S. and globally.

The quarterly report highlights skills that are gaining demand among professionals seeking to keep pace with innovation. Udemy analyzes data from its nearly 14,000 global customers to uncover which skills saw the largest increase in course consumption (i.e., minutes spent learning) in Q1 2023.

Courses on ChatGPT grew by 4,419% globally, the report found. In the U.S., ChatGPT interest grew by 5,226% compared to Q4 2022. Nearly 470 new ChatGPT courses were added to the Udemy platform this quarter, garnering over 420,000 enrollments as of March 31. Udemy says its business learners are especially focused on using ChatGPT for copywriting support to help boost SEO, idea generation to enhance visual presentations, and large-scale email creation to improve productivity.

(Source: Udemy)

The report underscored other emerging topics in artificial intelligence and machine learning, including a global surge in interest in Azure Machine Learning (281%), AI Art Generation (239%), Amazon EMR (227%), and Midjourney (218%).

Data-related industry certifications were also a focus for learners worldwide. For example, courses on Databricks Data Engineer Associate certification (320%) were #4 on the list of top tech skills globally and have continued to grow in popularity within both the U.S. (330%) and India (341%) markets. Among government organizations, cybersecurity certification courses saw a 280% increase.

“The Udemy platform connects more than 70,000 instructors with 59 million learners globally, enabling us to spot emerging trends early and equip learners with fresh content to help them quickly master and implement new skills within their workflow,” said Scott Rogers, senior VP of supply strategy at Udemy. “The skills required for professionals continue to rapidly shift, so it’s critical that leaders provide the training resources needed to keep up with advancements such as those related to artificial intelligence — to benefit both the employee and the company’s ability to get important projects done.”

Stephanie Stapleton Sudbury, president of Udemy Business, added, “Organizations that encourage ongoing learning and provide flexible skill development opportunities can help attract and engage top talent, increase productivity, and achieve greater business outcomes.”


7 Open Source Models From OpenAI

Elon Musk co-founded OpenAI, the firm behind the wildly popular ChatGPT, but he has been vocal about the company not staying true to its name. Musk recently tweeted expressing his disappointment at the company becoming “a closed source, maximum-profit company”.

“OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,” he tweeted.

The company has previously faced criticism for its closed-door policy, not only from the Tesla CEO, but a host of industry experts and members of the open-source community. The company has been accused of taking advantage of the open-source community without giving back.

However, OpenAI is now trying to embrace the open-source approach. By going open again with its ‘Consistency Models’, OpenAI is positioning itself for greater collaboration and contribution to the open-source community.

The company has previously open sourced quite a few models and tools. Here are seven open-source models from OpenAI:

Evals:

OpenAI open-sourced a software framework called Evals that allows users to evaluate the performance of AI models. The framework enables users to identify deficiencies in their models and provide feedback to direct improvements. OpenAI staff will actively review these evaluations when considering improvements to upcoming models. The tools are aimed at creating a vehicle to share and crowdsource benchmarks that represent a wide set of failure modes and difficult tasks. OpenAI plans to grant GPT-4 access to those who contribute high-quality benchmarks.
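To make the idea concrete, an eval harness at its simplest runs a model over labelled samples and reports how often the answers match. Everything below, including the stand-in model, the samples, and the exact-match scoring rule, is a hypothetical sketch, not OpenAI's actual Evals API, which defines its own registry and configuration format.

```python
# Minimal sketch of what an evaluation harness does: score a model
# against labelled samples. All names here are hypothetical.

def fake_model(prompt: str) -> str:
    # Stand-in for a real model completion call.
    answers = {"2 + 2 = ?": "4", "Capital of France?": "Paris"}
    return answers.get(prompt, "unknown")

samples = [
    {"input": "2 + 2 = ?", "ideal": "4"},
    {"input": "Capital of France?", "ideal": "Paris"},
    {"input": "Largest planet?", "ideal": "Jupiter"},
]

def run_eval(model, samples):
    # Exact-match scoring: fraction of samples answered correctly.
    correct = sum(model(s["input"]) == s["ideal"] for s in samples)
    return correct / len(samples)

accuracy = run_eval(fake_model, samples)  # model misses one of three
```

A failing sample like the third one is exactly the kind of "deficiency" an eval is meant to surface and feed back into model improvements.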

Whisper:

OpenAI introduced a multilingual speech recognition system called Whisper in September of 2022. Whisper is trained on 680,000 hours of multilingual and multitask supervised data. Whisper uses a simple end-to-end approach implemented as an encoder-decoder Transformer and has improved recognition of background noise, unique accents, and technical jargon. It does not beat models that specialise in LibriSpeech performance but shows robust zero-shot performance across many diverse datasets, making 50% fewer errors than other models. The open-sourced models and inference code will allow developers to add voice interfaces to a wider set of applications.
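Error comparisons like the "50% fewer errors" figure above are conventionally measured as word error rate (WER): the word-level edit distance between a reference transcript and the model's output, divided by the length of the reference. A minimal stdlib sketch:

```python
# Word error rate (WER), the standard speech-recognition metric:
# (substitutions + insertions + deletions) / reference word count,
# computed via dynamic-programming edit distance over words.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One dropped word out of six: WER = 1/6.
score = wer("the cat sat on the mat", "the cat sat on mat")
```

"50% fewer errors" in this framing means roughly half the WER of the comparison models on the same audio.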

DALL-E:

DALL-E and its successor DALL-E 2 are deep learning models developed by OpenAI that generate digital images from natural language descriptions. DALL-E 2, designed to generate more realistic images at higher resolutions, entered into a beta phase with invitations sent to 1 million waitlisted individuals in July 2022, and was opened to anyone in September 2022. In November 2022, OpenAI released DALL-E 2 as an API, allowing developers to integrate the model into their own applications, and Microsoft unveiled their implementation of DALL-E 2 in their Designer app and Image Creator tool included in Bing and Microsoft Edge. The API operates on a cost per image basis.

Spinning Up:

Spinning Up is an educational resource by OpenAI for learning about deep reinforcement learning (deep RL), which combines reinforcement learning with deep learning. It includes an introduction to RL terminology and theory, an essay on becoming an RL researcher, a list of important papers, code implementations of key algorithms, and exercises.

CLIP:

OpenAI CLIP is a machine learning model that uses natural language descriptions of images to perform tasks related to natural language and image processing. It can classify images, detect objects, and retrieve images based on text prompts. CLIP is trained on a large dataset of images and captions and is available as an open-source model. Its unique feature is that it can perform well on a variety of tasks without the need for annotated image data.
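That zero-shot ability comes from a simple underlying mechanic: embed the image and each candidate text label in a shared space, then pick the label with the highest cosine similarity to the image. The tiny hand-made vectors below are hypothetical stand-ins; real CLIP embeddings are high-dimensional outputs of trained image and text encoders.

```python
# Toy version of CLIP-style zero-shot classification: the label
# whose text embedding is closest (by cosine similarity) to the
# image embedding wins. The 2-D vectors here are made up for
# illustration only.
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def classify(image_vec, label_vecs):
    # Pick the candidate label most similar to the image.
    return max(label_vecs, key=lambda lbl: cosine(image_vec, label_vecs[lbl]))

label_vecs = {
    "a photo of a dog": (0.9, 0.1),
    "a photo of a cat": (0.1, 0.9),
}
image_vec = (0.8, 0.3)  # pretend image embedding

best = classify(image_vec, label_vecs)  # closest to the dog label
```

Because the labels are just text, new classes can be added at inference time without any annotated image data, which is what makes the approach zero-shot.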

Jukebox:

OpenAI Jukebox is a generative model that creates music using deep neural networks trained on a large dataset of music samples from various genres. It can generate original music samples that are similar in style and structure to different types of music. Jukebox can also generate music with lyrics based on a given prompt. It is an open-source project used by researchers and musicians worldwide to explore the capabilities of generative models in the creative arts.

Point-E:

Point-E is a system from OpenAI that generates 3D point clouds from text prompts, open-sourced in December 2022. It works in two stages: a text-to-image diffusion model first produces a single synthetic view of the object, and a second diffusion model then generates a 3D point cloud conditioned on that image. While its samples are lower quality than some state-of-the-art text-to-3D methods, Point-E can produce a point cloud in only a minute or two on a single GPU, orders of magnitude faster than prior approaches.

The post 7 Open Source Models From OpenAI appeared first on Analytics India Magazine.

Researchers Have Trained an AI to Decode Human Thoughts

May 2, 2023, by Jaime Hampton

Researchers at the University of Texas at Austin have developed a new artificial intelligence system that can translate a person’s brain activity into a continuous stream of text.

Called a semantic decoder, the system is a noninvasive method that first involves measuring brain activity using an fMRI scanner, an imaging machine that tracks blood flow across different parts of the brain. The semantic decoder is trained through this imaging while the patient listens to hours of podcasts in the scanner.

For the study, three people listened to podcasts from headphones for up to 16 hours each in the fMRI scanner. Much of the listening material consisted of stories from "The Moth Radio Hour," the popular public radio show with a weekly podcast.

After the decoder is trained in this way, if the patient is open to having their thoughts decoded, they can listen to a new story or imagine telling a story and the machine will generate corresponding text from brain activity alone, according to a report from UT News.

For its predictive text generation, the decoder uses a transformer language model comparable to the large language model powering ChatGPT. Instead of a verbatim transcript of the patient’s thoughts, the system produces text that only partially matches the intended meanings of the original words. The UT News report included the following example:

These example segments were manually selected and annotated to demonstrate typical decoder behaviors, says UT News. (Credit: University of Texas at Austin)

This study was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. The study results were published in a paper in the journal Nature Neuroscience. The paper addresses concerns about patient privacy and the possibility for misuse of this technology. Decoding was only possible with patients who had willingly participated in training the decoder. The study notes that results were incoherent for individuals who did not train with the decoder or who purposefully thought about other things.

The researchers say they are taking privacy and safety concerns seriously and want to make sure people only use this technology voluntarily and to help others, according to UT News.

Though the system is not practical due to its need for a bulky fMRI machine, the researchers believe the technology could shift to a more portable brain imaging format like functional near-infrared spectroscopy (fNIRS). The technology could be a solution for patients who are left unable to speak due to health issues like strokes or neurological diseases. Read the original article and listen to a podcast that dives into the research methods at this link.
