Upgrade your ChatGPT skills with this $20 training: Use AI to work smarter, not harder

OpenAI's ChatGPT has already become an integral tool for many professionals when it comes to streamlining workflows and saving time. However, there is more to learning to use ChatGPT than simply entering prompts and revising what the AI feeds back. If you want to use ChatGPT to craft high-quality text, support ideation, and more, then you may want to enroll in this online training that gives you a solid foundation and upskills you on AI for just $20 (reg. $52).

Become a more advanced ChatGPT user in four hours

This online course bundle contains dozens of lessons breaking down beginner and advanced skills to help you use ChatGPT effectively. If you are new to the AI chatbot, or want to make sure you didn't miss any tips and tricks when you first jumped in months ago, start with ChatGPT for Beginners, taught by Mike Wheeler, a cloud computing instructor who shows users how to write basic prompts to answer questions and craft prose.

For more advanced instruction, move on to ChatGPT: Artificial Intelligence (AI) that Writes for You. This course contains 12 lectures showing you how to write ChatGPT prompts that generate blog posts, sales copy, and other more professional content. Business owners may be able to save money or time writing content for their websites by using the chatbot, but of course, that content — business or otherwise — will still need some human revision and you'll have to verify that the words are original.

ChatGPT is already being used to automate basic tasks in the professional sphere, and you may be able to customize your own bot and make it work for you. The final two courses in this bundle, Create a ChatGPT A.I. Bot with Tkinter and Python and Create a ChatGPT A.I. Bot with Django and Python, require more technical experience. Between these two courses, learners can find out how to use Generative Pre-trained Transformer tech to craft their own ChatGPT-powered applications, including interactive coding websites and automatic text generators.

Take advantage of the power of this large language model

The accessibility of this NLP technology has given the power of AI to the masses. Gaining more advanced skills could help you work smarter and stand out from the crowd.

Get the Complete ChatGPT Artificial Intelligence OpenAI Training Bundle for $20 (reg. $52).

ZDNET Recommends

6 Brain-Inspired Computing Methods Bringing Sci-Fi to Reality

John McCarthy, a computer scientist, coined the term ‘artificial intelligence’, which replaced Pitts and McCulloch’s ‘nerve net’; the term has since been defined differently by experts throughout history. The notion of ‘human intelligence in machines’, however, is common to all these definitions.

Demis Hassabis says, “I studied neuroscience for my PhD — to look into the brain’s memory and imagination; understand the mechanisms involved; and then [use that to] help us think about how we might achieve these same functions in our AI systems.”

Here is a list of the most famous brain-inspired computing methods:

Artificial Neural Network – ANN

A neuron is like a tiny part of a brain-like computer system. It gets messages from other parts, does some math on them, and then sends out its own message. The connection between these parts is called a “synapse,” and how strong the connection is can be thought of as a “weight.”

The messages themselves are called “activations,” and the math the neuron does is a special kind of calculation. All these neurons working together make up something called an Artificial Neural Network (ANN). There are different kinds of these networks, like one that’s good at recognising pictures, one for patterns, and one for sequences of things.
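The pieces above — weighted connections, activations, and a little math per neuron — can be sketched in a few lines of code. The weights and bias values below are made-up numbers for illustration, not a trained network:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    squashed by a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid keeps the output in (0, 1)

def tiny_ann(x):
    """A tiny two-layer network: two hidden neurons feed one output neuron."""
    h1 = neuron(x, [0.5, -0.6], 0.1)
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.05)

print(tiny_ann([1.0, 0.5]))  # a single activation between 0 and 1
```

A real ANN learns its weights from data; here they are fixed just to show the forward pass.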

Spiking Neural Networks – SNN

Imagine you have a regular artificial neural network (ANN) that does certain calculations. SNNs add something special: instead of just doing math, each part of the system can also “spike” with a little electrical signal, like in our brain.

This makes SNNs a bit like regular computer systems, but with an extra brain-like feature. Imagine if each part of the computer could send quick signals to each other. These signals can be simple or more complex, like how our brain’s cells communicate. Even the way these signals travel can be different, like how they move in our brain’s different parts.

These spiking neural networks can be used for lots of smart tasks like recognising images, understanding speech, and even making self-driving cars smarter. They’re a newer way of using computers that’s more energy-efficient and closer to how our brains work.
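A minimal sketch of that spiking behaviour is the classic leaky integrate-and-fire model: the neuron's potential accumulates input, leaks over time, and fires a spike when it crosses a threshold. The threshold, leak factor, and input values below are arbitrary:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: the membrane potential leaks, integrates
    each input, and emits a spike (1) when it crosses the threshold."""
    v, spikes = 0.0, []
    for i in input_current:
        v = v * leak + i          # leak, then integrate the input
        if v >= threshold:
            spikes.append(1)
            v = 0.0               # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.9]))  # -> [0, 0, 0, 1, 0]
```

Information is carried by the timing of spikes rather than by continuous activations, which is what makes SNN hardware so energy-efficient.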

Neuromorphic computing

This is like making computer parts act like brain parts. It uses special chips with tiny circuits to copy how our brains work. These chips can do smart things like neurons in our brain. The term “neuromorphic” can mean different types of chips, whether they’re all electronic, a mix of electronic and other, or even just software. It’s like a special kind of brain-like computer that focuses on using tiny hardware pieces to do clever calculations.

Current AI systems that use artificial neural networks have some limitations. They keep data and calculations separate, and they use digital devices to represent continuous signals, which is not very natural. But there’s a new idea called neuromorphic computing that makes these systems work more like real brains. They use special parts called memristors, which are like little switches for electricity. These memristors can remember charges, acting like the connections in our brain. They’re good at handling complex data like images and videos, and they do it much faster and with less power than regular systems.

Scientists are making these memristors using different materials like graphene and even biological cells. They’re also looking at using light to process data, which can be super fast, but it’s still expensive to make those kinds of systems. Interestingly, even tiny living organisms like slime molds can do some similar things, and they might help in making these smart computing parts.
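The memristor idea can be sketched with a toy simulation: in a crossbar array, each column wire sums the currents I = G × V flowing through its devices (Kirchhoff's current law), so the hardware computes a matrix-vector product in one analog step. The conductance and voltage values below are made up for illustration:

```python
def crossbar_mac(conductances, voltages):
    """Simulate a memristor crossbar: each column sums the currents
    I = G * V through its memristors, yielding a matrix-vector product."""
    return [sum(g * v for g, v in zip(column, voltages))
            for column in conductances]

# Hypothetical stored conductances (the "weights") and input voltages.
G = [[0.1, 0.2], [0.3, 0.4]]
V = [1.0, 0.5]
print(crossbar_mac(G, V))  # -> [0.2, 0.5]
```

Doing this in physics rather than software is why memristive chips can run neural network workloads faster and at lower power.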

Reservoir computing

Just like neurons in our brain, the reservoir in reservoir computing processes data in a complex and parallel manner. When we receive input in our brains, it goes through a network of neurons that collectively transform the input into meaningful patterns. Similarly, in reservoir computing, the input data is processed by the interconnected units in the reservoir, which transforms the input into a more complex and informative representation.

The output of the reservoir, which is obtained after the data has passed through the reservoir’s internal dynamics, is analogous to the processed information in our brain. This output can be used for tasks like prediction, classification, and pattern recognition, similar to how our brains make sense of the world around us.

In essence, reservoir computing emulates the distributed and parallel processing observed in the brain, making it a brain-inspired approach to computing that leverages the principles of neural dynamics for various applications.
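A minimal echo state network sketch shows the idea: the recurrent "reservoir" weights are random and stay fixed, and only a linear readout (left untrained here, so its weights are placeholders) would ever be learned:

```python
import math
import random

random.seed(0)
N = 20  # reservoir size

# Fixed random weights: only the readout is ever trained.
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
W = [[random.uniform(-0.1, 0.1) for _ in range(N)] for _ in range(N)]

def run_reservoir(inputs):
    """Drive the fixed recurrent reservoir with an input sequence and
    return its final state: a rich nonlinear echo of the input history."""
    x = [0.0] * N
    for u in inputs:
        x = [math.tanh(W_in[i] * u + sum(W[i][j] * x[j] for j in range(N)))
             for i in range(N)]
    return x

state = run_reservoir([0.2, -0.1, 0.4])
# A (hypothetical, untrained) linear readout maps the state to an output.
W_out = [random.uniform(-1, 1) for _ in range(N)]
print(sum(wo * s for wo, s in zip(W_out, state)))
```

Training only the readout is what makes reservoir computing so cheap compared with training a full recurrent network.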

Quantum Computing

Quantum computing is a new way of making computers that are super good at solving really hard problems. Instead of using regular computer bits that are just 0 or 1, quantum computers use special bits called qubits. These qubits can be 0, 1, or both 0 and 1 at once, thanks to weird rules from the tiny world of quantum physics. This “both at once” thing makes quantum computers super fast at exploring many possibilities all at the same time.

They’re like super-smart puzzle solvers that can look at lots of answers together. But building and using quantum computers is tricky because they’re picky and need special conditions to work. Scientists are working on making them better, and when they’re ready, they could help solve problems in medicine, technology, and more.
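The "both at once" idea can be simulated classically for a single qubit: the state is a two-component vector, and a Hadamard gate turns the definite state |0⟩ into an equal superposition, so measuring 0 or 1 each has probability one half:

```python
import math

# A qubit is a 2-component complex vector; |0> is [1, 0].
state = [1 + 0j, 0 + 0j]

def apply(gate, s):
    """Multiply a 2x2 gate matrix into the state vector."""
    return [gate[0][0] * s[0] + gate[0][1] * s[1],
            gate[1][0] * s[0] + gate[1][1] * s[1]]

# The Hadamard gate puts |0> into an equal superposition of 0 and 1.
h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]

state = apply(H, state)
probs = [abs(a) ** 2 for a in state]
print(probs)  # ~[0.5, 0.5]: "both at once" until measured
```

Simulating n qubits this way needs a vector of 2^n amplitudes, which is exactly why real quantum hardware is needed for large problems.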

Hyperdimensional Computing – HDC

Imagine a new way of doing computer stuff called hyperdimensional computing. Instead of using separate bits of info, like a car’s make, model, and colour, we put them all together as one thing called a “hyperdimensional vector.” This vector is like a special list of numbers.

Normal vectors have three numbers for x, y, and z (like 3D games). But hyperdimensional vectors can have lots more numbers, like 10,000! This helps computers do really smart things and goes beyond current limits. These special math things could change how we do computer smarts and artificial intelligence.

Like the brain, it makes connections between things to understand and remember. It’s like teaching computers to think and remember in a creative, brain-like way, which helps them do smart tasks more efficiently.
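A toy sketch of the idea: with 10,000-dimensional vectors of +1/−1 entries, two concepts can be "bound" together by elementwise multiplication and recovered again, while unrelated random vectors are nearly orthogonal. The "colour is red" pairing is an illustrative example:

```python
import random

D = 10_000
random.seed(1)

def rand_hv():
    """A random bipolar hypervector of +1/-1 entries."""
    return [random.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    """Elementwise multiply: pairs two concepts into one vector."""
    return [x * y for x, y in zip(a, b)]

def similarity(a, b):
    """Normalised dot product; near 0 for unrelated random vectors."""
    return sum(x * y for x, y in zip(a, b)) / D

colour, red = rand_hv(), rand_hv()
car = bind(colour, red)              # encode "the car's colour is red"
recovered = bind(car, colour)        # unbind with the same key
print(similarity(recovered, red))    # -> 1.0: the value is recovered
print(similarity(rand_hv(), red))    # ~0.0: random vectors are dissimilar
```

Because binding with ±1 entries is its own inverse, the stored value comes back exactly, which is the brain-like associative memory the text describes.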

The post 6 Brain-Inspired Computing Methods Bringing Sci-Fi to Reality appeared first on Analytics India Magazine.

Best Python Tools for Building Generative AI Applications Cheat Sheet

The Voice of a New Generation

KDnuggets has released an insightful new cheat sheet highlighting the top Python libraries for building generative AI applications.

As readers are no doubt aware, generative AI is one of the hottest areas in data science and machine learning right now. Models like ChatGPT have captured public imagination with their ability to generate remarkably high-quality text from simple prompts.

Python has emerged as the go-to language for developing generative AI applications thanks to its versatility, vast ecosystem of libraries, and easy integration with popular AI frameworks like PyTorch and TensorFlow. This new cheat sheet from KDnuggets provides a quick overview of the key Python libraries data scientists should know for building generative apps, from text generation to human-AI chat interfaces and beyond.

For more on which Python tools to use for generative AI application building, check out our latest cheat sheet.

Best Python Tools for Building Generative AI Applications Cheat Sheet

There are many open source Python libraries and frameworks available that enable developers to build innovative Generative AI applications, from image and text generation to Autonomous AI.

Some highlights covered include OpenAI for accessing models like ChatGPT, Transformers for training and fine-tuning, Gradio for quickly building UIs to demo models, LangChain for chaining multiple models together, and LlamaIndex for ingesting and managing private data.

Overall, this cheat sheet packs a wealth of practical guidance into one page. Both beginners looking to get started with generative AI in Python as well as experienced practitioners can benefit from having this condensed reference to the best tools and libraries at their fingertips. The KDnuggets team has done an excellent job compiling and visually organizing the key information data scientists need to build the next generation of AI applications.

Check it out now, and check back soon for more.

More On This Topic

  • Data Cleaning with Python Cheat Sheet
  • The ChatGPT Cheat Sheet
  • RAPIDS cuDF Cheat Sheet
  • Streamlit for Machine Learning Cheat Sheet
  • GitHub CLI for Data Science Cheat Sheet
  • Docker for Data Science Cheat Sheet

Tenable Integrates Generative AI Capabilities Across Cybersecurity Platforms 

Cybersecurity firm Tenable has announced the launch of ExposureAI, new generative AI capabilities and services across the Tenable One Exposure Management Platform.

Tenable has also introduced Tenable Exposure Graph, a scalable data lake powered by Snowflake that fuels the ExposureAI engine.

This unified data platform – representing more than 1 trillion unique exposures, IT assets and security findings (vulnerabilities, misconfigurations and identities) across IT, public cloud and OT environments – is the largest repository of contextual exposure data in the world and feeds all of Tenable’s Exposure Management products.

Prevention has long been a challenge for security teams. Conducting analysis, interpreting the findings and identifying what steps to take to remediate and reduce risk has traditionally been a time-consuming process that puts organisational security in reactive mode.

Nearly six in 10 (58%) cybersecurity and IT pros say the security team is too busy fighting critical incidents to take a preventive approach to reducing their organisation’s exposure. Furthermore, 73% of cybersecurity and IT pros believe their organisation would be more successful at defending against cyberattacks if they could devote more resources to preventive cybersecurity.

“For years, Tenable has used its market-leading vulnerability management data and applied AI techniques to help organisations prioritise vulnerabilities based on true risk to the business. AI is a part of our DNA. Now we’re using generative AI to put more power than ever in the hands of security teams to inform their exposure management programs and root out cyber risk wherever it exists,” said Glen Pendley, Chief Technology Officer, Tenable.

The post Tenable Integrates Generative AI Capabilities Across Cybersecurity Platforms appeared first on Analytics India Magazine.

GitHub Upgraded Push Protection for Developers and Organisations

GitHub’s efforts to stop security vulnerabilities

GitHub has announced an update to enhance the security of secrets for both individual developers and developer organisations. The world’s largest platform for collaborative software development had previously enabled this feature only for certain users who had the means to ensure the protection of their confidential code.

However, this has now been extended to all users, enabling them to safeguard their secret codes independently. This new approach guarantees the security of personal codes while eliminating concerns about reliance on others for protection.

Previously, users could only enable push protection for a specific repository, and they had to depend on administrators to change the settings. Organisations also couldn’t measure the overall impact of push protection. But now, things are changing.

Push protection is even better now, with two new features being tested: push protection for users and push protection metrics for organisations. Users can keep their pushes safe across the whole GitHub platform just by adjusting their personal account settings, and those in charge of an organisation can get information about prevented secret leaks.

For team collaborations, GitHub offers valuable insights into the effectiveness of secret protection measures. This feature provides information about the number of detected secrets, those prevented, and the reasons some secrets may bypass the security measures. By offering a comprehensive view of code protection, GitHub assists in improving security practices.

Getting started with these updates is straightforward. Individual developers can enable secret protection within their GitHub settings, ensuring that every line of code they commit is safeguarded. For organisations, this security enhancement is available in the security overview section, enabling owners and managers to monitor push protection metrics. This comprehensive data insight aids in refining strategies to prevent secret leaks.

GitHub’s commitment to enhancing security measures is evident through these updates. By empowering developers to protect their codes and offering detailed metrics for organisations, GitHub aims to create a safer development environment for all users.

This update is timely because of a new attack campaign that uses malicious configuration files for a tool called OpenBullet to target people who aren’t skilled at hacking, with the goal of planting a harmful program on their computers that can steal important information.

A company called Kasada found this activity. Its researchers say the attackers are taking advantage of novice hackers who don’t know much, using them to do their dirty work.

OpenBullet is a legitimate tool for testing websites, but here it is being abused: the malicious configuration files are combined with password lists to try to break into websites, and they run without showing anything on the screen, which makes them sneaky.

This is a problem because it exploits new hackers to do harm without their knowledge. It’s important to be careful online.

The post GitHub Upgraded Push Protection for Developers and Organisations appeared first on Analytics India Magazine.

India’s Space Techs Don’t Get Enough Investments

India’s space ambitions are soaring to new heights, yet a direct contrast between AI and space tech funding indicates that the country is not playing to its strengths. In 2022, startup funding in the sector grew from $67.2 million in 2021 to $108.52 million, and it is set to go up to $300 million in 2023. In contrast, AI startup investments reached $3.24 billion in 2022.

However, there are few players actively investing in India’s space vision. One entity at the forefront of these investments is Speciale Invest, an organisation that has been actively fostering the growth of space tech startups in India since its inception in 2017. It is the firm behind space tech startups like Agnikul Cosmos, Astrogate Labs, Kawa Space, GalaxEye and Inspecity.

In an exclusive interview with AIM, Vishesh Rajaram, Managing Partner at Speciale Invest, spoke about the company’s investments and India’s burgeoning space tech ecosystem at large.

Cultivating a Frontier of Innovation

Speciale Invest’s journey in space technology investment began in 2017, with an eye toward the burgeoning landscape of advanced science and technology ventures. The organisation embarked on a mission to cultivate a culture of innovation in the country, capitalising on India’s deep-rooted expertise in science and technology.

“ISRO [is] such a powerful and highly qualified department of science agency in the country that we felt that with their help, you could [spawn] an entire industry in this country. Additionally, right around COVID the space policy came out, and it was along the same lines, suggesting that ISRO has strong competency and it should help expand India’s market share, which I think was 2%,” said Rajaram, adding that the growth will be led by ISRO, IN-SPACe and the Department of Science and Technology.

“We’ve got a reasonable 15-20% of our investment in the sector. The sector is only gonna get better,” said Rajaram. Speciale Invest believes that its role was not just to invest capital, but also to facilitate the growth of startups that are driving technological advancements in the space sector.

Rajaram said that the landscape was shaped by several trends, including the privatisation of the space sector globally, with companies like SpaceX and Amazon leading the way. He said that his firm identified a ripe opportunity for growth in India, as the country witnessed an increase in the number of satellites launched into space, alongside a reduction in the size and cost of these satellites and startups.

Investing in Innovation

The organisation made strategic investments in companies that demonstrated unique offerings in the space tech sector. One notable investment was in Agnikul Cosmos, marking its initial foray into the sector. Agnikul was positioned to capitalise on the trends of satellite privatisation and accessibility, aligning with the organisation’s forward-looking approach.

Their investment portfolio expanded to encompass a range of cutting-edge technologies and services. One investment was in a company that boasts a constellation of satellites with multiple sensors, offering unique capabilities within the country. Another was in Astrogate Labs, a company specialising in optical communication terminals for satellites. Kawa Space, yet another investee, focuses on geo-data intelligence from the ground.

Challenges and Opportunities

While the growth potential of India’s space tech sector is immense, it is not without challenges. Vishesh Rajaram acknowledged that the sector is highly engineering-intensive, requiring specialised expertise. Founders and teams need to navigate complex technical and regulatory landscapes. However, the organisation sees this challenge as an opportunity to foster a deep pool of talent at the intersection of academia and industry.

He said that it is a challenging space but ISRO stands as a solid backbone.

“This is a very engineering-heavy sector. So this is not for anybody and everybody to come and start. These are the companies that will only come in from PhDs, coming from academia, or coming in at the intersection of academia and industry,” Rajaram said.

Collaboration and Growth

Collaboration between startups, academia, and government institutions has been instrumental in nurturing India’s space tech ecosystem. The Department of Space, along with entities like IN-SPACe, has played a crucial role in providing startups with access to talent, testing facilities, and policy frameworks. This collaboration has helped startups overcome hurdles and create synergies, propelling the entire sector forward.

“The good news is that the kind of support and talent that ISRO is providing to these startups is phenomenal. A lot of ex-ISRO scientists are also available and are committing time for these startups to sort of build-up and go. Otherwise, it’s not like running an e-commerce company. Definitely,” he added.

A Vision for the Future

Looking ahead, the VC firm envisions exponential growth for India’s space tech sector. As per recent reports, there are around 140 space startups in India, each with unique propositions and capabilities. These startups are expected to raise substantial amounts of capital, both from venture capitalists and from government-supported projects. The sector’s potential market share is predicted to increase from 2% to 8% over the coming years.

During the conversation, Rajaram cited a report by Arthur D Little which highlights the shift in the global space sector towards more private sector involvement. India is also experiencing this change and could grow its $8 billion space economy to $100 billion by 2040, he said.

ADL labels this phase “Space 4.0,” evolving from early astronomy to collaboration. India’s space market grows at 4.3% CAGR, exceeding the global 2.2%. Its economy could hit $40 billion by 2040, but the report suggests aiming for $100 billion. Proposed growth areas include satellite-based internet, leading launch services, space mining, and manufacturing. Success could double India’s space GDP contribution to 0.5% and create 3 million jobs.

While India’s global space economy share is 2%, it could reach 4% at 9.2% CAGR (2022-2040). Government investment and private sector involvement will drive this, meeting launch and satellite service demands.

The report, in line with Rajaram’s opinion, indicates that the growth of domestic space startups via ISRO and IN-SPACe shows private sector interest. Yet it suggests that funding and technology challenges remain, and urges a revised FDI policy. China’s space activity exceeds India’s; global relationships and strategic shifts are needed for India to compete.

Nonetheless, there are some positive indicators. In June, Google made its inaugural investment in India’s space-tech field, leading a $36 million Series B funding round in Pixxel, a Bengaluru-based startup. Anand Mahindra has also backed Agnikul Cosmos.

In September 2022, Skyroot Aerospace secured a significant funding milestone in the space technology sector. The company successfully raised $50.5 million in its Series B funding round, which was spearheaded by Singapore’s GIC.

India’s journey into space tech represents a convergence of technological prowess, innovative thinking, and strategic investments. As India continues to advance its space ambitions, the foundations laid by organisations like Speciale Invest will play a pivotal role in shaping the future of the country’s space tech ecosystem. With a focus on innovation, expertise, and collaborative growth, India is poised to make a mark on the global stage of space technology.

The post India’s Space Techs Don’t Get Enough Investments appeared first on Analytics India Magazine.

OpenAI Might Go Bankrupt by the End of 2024

When OpenAI filed for a trademark on ‘GPT’, we called it the beginning of the end of OpenAI, saying that a lot of people might eventually stop using the technology and move away. The company never really won the trademark but there is clear evidence that a lot of people are moving away from its GPT products.

To begin with, when the ChatGPT website saw a decline in users in June compared to May, it was asserted that this might just be because students were out of school. On the other hand, it could also be because, ever since the company released the ChatGPT API, people have started building their own bots instead of using the original offering.

Now, it’s July, and the user base of ChatGPT has dropped even further. According to SimilarWeb, compared to 1.7 billion users in June, there were only 1.5 billion in July, a 12% decline month on month. And even this does not include API usage.

According to a user on X, a major reason for this decline might simply be API cannibalisation: many companies prohibit their employees from using ChatGPT directly for work, but allow them to use the API to leverage the large language model (LLM) in different workflows.

Well, there are other reasons

It is quite convenient for OpenAI to assume that the decline in users is just because people have started using the API to build their own products. The twist here is the rise of open-source LLMs. Meta’s Llama 2, released in partnership with Microsoft, OpenAI’s biggest backer, allows people to use the model for commercial purposes.

So, instead of going with what OpenAI offers, which is a paid, proprietary, and restricted version, why would people not go for Llama 2, which is easily modifiable? Arguably, it is also a lot better in certain use cases when compared to GPT.

You can now test Llama 2 in less than 10 minutes.
Llama 2 is one of the most consequential developments of 2023. It's an open-source alternative to proprietary Large Language Models with a commercial license.
Llama 2 has every ingredient to become successful: it’s open-source,… pic.twitter.com/SQa9S9dSuK

— Santiago (@svpino) July 31, 2023

Another interesting thing to note is that, even though Sam Altman does not own equity in OpenAI, the company shifted from being a non-profit to a for-profit some time ago. Though Altman might not care about the profits, the company does. But OpenAI is not profitable yet: in May it was reported that its losses had doubled to $540 million since it started developing ChatGPT.

To put OpenAI’s profitability in perspective, a recent report by Investopedia argued that it is too early for any leading AI company, including OpenAI, Anthropic, or Inflection, to head into the initial public offering (IPO) market. The same report said that it typically takes at least 10 years of operation and around $100 million in revenue for an IPO to be successful.

Microsoft’s $10 billion investment in OpenAI is possibly keeping the company alive at the moment. On the other hand, OpenAI projected annual revenue of $200 million in 2023 and expects to reach $1 billion in 2024, which seems a long bet given that losses are only increasing.

It might be true that OpenAI’s shift to a paid version has brought in a lot of money. Presumably, the projected revenue comes from people buying API access and using the GPT-4-based chatbot, or other offerings such as DALL-E 2. But the financial numbers on this are still unclear.

Even then, if this LLM-focused company goes for an IPO, it might well be bought out by bigger companies. Arguably, this might be one of the exit strategies for the investors. But nothing is clear here.

As for the employees, while a lot of them might be leaving to join rivals, the company is still hiring, willing to pay handsome salaries, and even expanding its offices to London.

In December, Altman said that the cost of running the company and ChatGPT is “eye watering”, which is why it was later monetised. According to a report, ChatGPT costs $700,000 per day to operate. All of this is coming out of Microsoft’s and other recent investors’ pockets, which might eventually be emptied if OpenAI does not become profitable soon.

The Musk pressure

Earlier it was believed that OpenAI’s biggest competitors in the AI race would be Google or Meta, but with Elon Musk’s xAI now in the picture, the conversation has completely shifted. Musk is bullish on taking on OpenAI and building a rival chatbot. Interestingly, his idea of building a “TruthGPT” that is not as politically biased as ChatGPT has caught many people’s interest. He even bought 10,000 NVIDIA GPUs to get ahead.

Adding to all of this is the GPU shortage. Altman has said that the shortage of GPUs in the market is preventing the company from training further models and improving them. Now that the company has filed for a trademark on ‘GPT-5’, it is clear that it wants to train one. The shortage, in turn, has been blamed for a huge decline in the quality of ChatGPT’s output.

So if OpenAI does not get more funding soon, funding it needs to acquire the NVIDIA GPUs coming in the second quarter of the year and to keep training its models, the company might have to file for bankruptcy by the end of 2024. Till then, competitors are emerging, losses are increasing, users are declining, lawsuits are piling up, and quality is going down as well.

The post OpenAI Might Go Bankrupt by the End of 2024 appeared first on Analytics India Magazine.

Meta’s Double Helix & Farewell to Protein Folding

After abandoning its metaverse dream, Meta has laid off the protein folding team, formed exactly two years ago, that built the revolutionary ESMFold, or Evolutionary Scale Modeling, system for protein structure prediction. The 12-member team also created a comprehensive database of over 600 million protein structures.

The move indicates Meta’s increasing focus on generating revenue from commercial AI rather than life sciences. During the second quarter of 2023, the company achieved its highest level of profitability since 2021, with a notable 34% annual growth in the number of ad impressions displayed across its suite of applications. No wonder the Facebook owner is giving up on its protein-folding ambitions.

As per a report by Financial Times, Yaniv Shmueli, a former research scientist and engineering manager at Meta AI who worked on ESMFold, mentioned that Meta had attempted to adjust its research approach in order to gain deeper insights into the development of advanced intelligence with potential business applications, as opposed to focusing solely on “curiosity projects”.

As part of a major restructuring effort, Meta had a final round of layoffs under a plan, announced in May, to cut 10,000 jobs; letting go of the ESMFold team is also part of this. Previously, Meta laid off over 11,000 employees, bringing its workforce back to its mid-2021 level after a period of rapid expansion since 2020. Together, the layoffs have reduced the workforce from nearly 90,000 (mid-November 2022) to about 65,000 employees.

For years, Meta’s managers have been promoted based on the size of the teams they build. However, in Mark Zuckerberg’s “year of efficiency”, promotions will now become less frequent. The “efficiency” is resulting in cost-cutting, organisational restructuring, and mass layoffs.

Meanwhile, Meta is going all-in for generative AI for sure.

Although its new decentralised social media platform Threads became an instant success, as AIM reported earlier, Zuckerberg may be using its data to train an LLM more powerful than Llama 2. Meta is also planning to unveil a range of AI-powered chatbots featuring unique personalities for platforms like Instagram and Facebook, to enhance user interaction and to facilitate human-like conversations.

Read more: Protein Wars: It’s ESMFold vs AlphaFold

ESMFold Vs AlphaFold

AlphaFold, initially launched in 2018 by Google DeepMind, was followed by a second version in 2020, and an open-source release of the deep-learning neural network AlphaFold 2 arrived last year.

Besides AlphaFold and ESMFold, Chinese biotech firm Helixon developed OmegaFold, Generate Biomedicines brought Chroma, Baker Lab brought in RoseTTAFold and RoseTTAFoldDiffusion, and the list goes on.

Meta’s ESMFold leverages a large-scale language model, focusing on evolutionary-scale modelling, while AlphaFold is built on a neural-network-based model that achieves high accuracy both for predicted multimeric interfaces and within individual chains.

Although ESMFold offers 60 times faster inference, enabling analysis of metagenomic protein structural spaces, particularly for sequences from natural environments, it has proved to be less accurate than AlphaFold.

ESMFold generates predictions from a single sequence input, leveraging the language model’s internal representations, while AlphaFold and others employ multiple sequence alignments and templates. ESMFold is good at atomic-level predictions, especially for low-perplexity sequences, with prediction speed being its major advantage.

Google Taking the Lead

Besides being the most cited paper of 2022, AlphaFold 2 won CASP14 in 2020 and is regarded as the best protein-folding model. The collaboration with EMBL-EBI expanded the database of predicted protein structures more than 200-fold. Meanwhile, AlphaFold is finding several real-life applications, including predicting protein structures of SARS-CoV-2, the virus behind the COVID-19 outbreak; advancing the development of drugs for malaria and neglected diseases like leishmaniasis; delivering gene therapy; treating antibiotic resistance; fighting pollution; and much more. However, there is barely any public information on ESMFold’s use cases.

Beyond AlphaFold, when it comes to using generative AI in healthcare, it is safe to say Google is leading.

Their Med-PaLM 2, built upon the proprietary language model PaLM 2, is a medical chatbot that answers medical questions and has been a fan favourite since its launch. It is being tested at healthcare institutions like the Mayo Clinic research hospital. Google Health, Google DeepMind and Google AI have also unveiled Med-PaLM M, a large multimodal generative model that flexibly encodes and interprets biomedical data. It can handle various types of medical data, including clinical language, medical images, and genomics, and performs well on a wide range of tasks, all using the same set of model weights.

Nevertheless, this is not the first time Meta has disbanded a project by laying off its division. Earlier this year, the tech giant announced 10,000 job cuts in its metaverse division after suffering a $13.7 billion loss in 2022, turning its metaverse dreams into a nightmare. It is still unclear why Zuckerberg decided to give up on the ESMFold team, especially considering the immense potential of protein-folding models to touch human lives.

Read more: From Humble Beginnings to Scientific Stardom: Meet the Protein Prodigy from Bengal

The post Meta’s Double Helix & Farewell to Protein Folding appeared first on Analytics India Magazine.

Learn to create a ChatGPT AI bot with this course bundle


Improve your ChatGPT skills with this training bundle.

You may have experimented with ChatGPT, and found it to be a useful tool on its own for work, school, or personal projects. But if you take the opportunity to study even basic coding, you can expand what you can do with AI simply by customizing your own chatbot.

Start your coding education during this back-to-school sale and get the 2023 Ultimate AI ChatGPT and Python Programming Bundle for only $30 until August 13.

Code your own AI chatbot with Python

This AI programming bundle primarily focuses on cultivating your skills with Python. Courses start at the very beginning and may be most useful to learners with little coding experience. If Python is a new language to you, start with Python 3: From ZERO to GUI Programming. This course gives you nine hours of programming tips to help you work through the other eight Python courses. Those introduce new skills like PDF handling and data analysis. One even gives you the chance to program an escape room.

Courses are taught by Dr. Chris Mall, who has a master's degree in IT and a PhD in Computer Science. He also teaches a course that introduces you to the theory behind AI and shows you how to program a robot. From there, you can apply your skills in two courses that walk you through the process of crafting a ChatGPT AI bot. Even if you aren't a coding expert, these courses will show you how to use ChatGPT to generate new code for you.
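Whether the front end is Tkinter or Django, the heart of any such chatbot is the same conversation loop: keep a running message history and resend it with each new user turn. The sketch below is illustrative only, not course material; the `client.chat.completions.create` call reflects OpenAI's current Python SDK, and the helper functions and model name are assumptions for the example.

```python
# Illustrative sketch of the conversation loop behind a ChatGPT-style bot.
# `client` is assumed to be an OpenAI SDK client (e.g. openai.OpenAI());
# the helper functions here are hypothetical, not taken from the courses.

def add_turn(history, role, text):
    """Return a new history list with one more message appended."""
    return history + [{"role": role, "content": text}]

def chat_once(client, history, user_text, model="gpt-4o-mini"):
    """Send the full history plus the new user message.

    Returns the assistant's reply and the updated history, so the
    bot remembers earlier turns on the next call.
    """
    history = add_turn(history, "user", user_text)
    response = client.chat.completions.create(model=model, messages=history)
    reply = response.choices[0].message.content
    return reply, add_turn(history, "assistant", reply)
```

A Tkinter or Django version differs only in how it collects `user_text` and displays `reply`; the conversation-state handling stays the same.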

86 hours of Python and AI training

Back-to-school season doesn't just mean going back to a classroom. Take control of your own education and learn to code and build your own AI chatbot.

Until August 13 at 11:59pm Pacific, get the 2023 Ultimate AI ChatGPT and Python Programming Bundle for $30. No coupon needed.


ChatGPT answers more than half of software engineering questions incorrectly


ChatGPT's ability to provide conversational answers to any question at any time makes the chatbot a handy resource for your information needs. Despite the convenience, a new study finds that you may not want to use ChatGPT for software engineering prompts.

Before the rise of AI chatbots, Stack Overflow was the go-to resource for programmers who needed advice for their projects, with a question-and-answer model similar to ChatGPT's.

Also: How to block OpenAI's new AI-training web crawler from ingesting your data

However, with Stack Overflow, you have to wait for someone to answer your question while with ChatGPT, you don't.

As a result, many software engineers and programmers have turned to ChatGPT with their questions. Since there was no data showing just how efficacious ChatGPT is at answering those types of prompts, a new Purdue University study investigated the question.

To find out just how accurate ChatGPT is in answering software engineering prompts, the researchers gave ChatGPT 517 Stack Overflow questions and examined the accuracy and quality of its answers.

Also: How to use ChatGPT to write code

The results showed that out of the 517 questions, 259 (52%) of ChatGPT's answers were incorrect and only 248 (48%) were correct. Moreover, a whopping 77% of the answers were verbose.

Despite the significant inaccuracy of the answers, the results did show that the answers were comprehensive 65% of the time, addressing all aspects of the question.

To further analyze the quality of ChatGPT responses, the researchers asked 12 participants with different levels of programming expertise to give their insights on the answers.

Also: Microsoft's red team has monitored AI since 2018. Here are five big insights

Although the participants preferred Stack Overflow's responses over ChatGPT's across various categories, they failed to correctly identify incorrect ChatGPT-generated answers 39.34% of the time.

According to the study, the well-articulated responses ChatGPT outputs caused the users to overlook incorrect information in the answers.

"Users overlook incorrect information in ChatGPT answers (39.34% of the time) due to the comprehensive, well-articulated, and humanoid insights in ChatGPT answers," the authors wrote.

Also: How ChatGPT can rewrite and improve your existing code

The generation of plausible-sounding answers that are incorrect is a significant issue across all chatbots because it enables the spread of misinformation. In addition to that risk, the low accuracy scores should be enough to make you reconsider using ChatGPT for these types of prompts.
