OpenAI Finally Releases Code Interpreter for ChatGPT Plus Users

Continuing the momentum, OpenAI has been busy with a string of announcements this week. After initially announcing the Code Interpreter plugin in alpha a few months ago, the company will finally make it available to ChatGPT Plus users in beta from next week. Code Interpreter enables a multitude of functions in ChatGPT, including analysing data, creating charts, uploading and editing files, performing math, and even running code, opening the doors to data science use cases.

Code Interpreter becoming available for all ChatGPT Plus users over the next week. Really amazing for any data science use case: https://t.co/hpel8xKyEg pic.twitter.com/Fd3SnPvVmT

— Greg Brockman (@gdb) July 6, 2023

With its availability in beta, ChatGPT Plus users will get access to a plugin that might even make a data scientist obsolete. The plugin can help with many of a data scientist's common workflows, including visualisation, trend analysis, and data transformation. Users are thrilled to have access, with people sharing tips on how to utilise it. Since the initial announcement of 12 plugins in April, the platform has grown to over 200 ChatGPT plugins.

Safety Concerns

OpenAI’s safety problems remain a perpetual concern. When ChatGPT plugins were first released, concerns about data security popped up. Jailbreaks and prompt-engineering attacks have been a worry; however, OpenAI continues to work on plugins.

Data leaks have always been a concern for OpenAI. With multiple data privacy incidents in the past, the company recently faced another one that pushed it to temporarily pull a feature. This week, OpenAI retracted its latest browse feature (using Bing) on mobile within two weeks of its release, owing to a data leak. Similar to Code Interpreter, the browse feature was also in beta.

However, the momentum continues, with OpenAI delivering a streak of announcements this week. The company announced plans to build a team that will work towards achieving superalignment within the next four years. Yesterday, OpenAI announced general availability of its GPT-4 API.

The irony of sidelining safety while concentrating on new feature announcements and future plans of achieving AGI will continue for OpenAI.

The post OpenAI Finally Releases Code Interpreter for ChatGPT Plus Users appeared first on Analytics India Magazine.

Alibaba Unveils AI Image Generator to Challenge Midjourney and DALL-E


Alibaba Group Holding Ltd on Friday announced its artificial intelligence image generator at the prestigious World Artificial Intelligence Conference 2023, as it ramps up its offerings in the fast-growing AI sector.

This cutting-edge AI image generation model, called Tongyi Wanxiang (which translates to ‘tens of thousands of images’), is now ready for beta testing and available to enterprise customers in China.

The newly introduced image generator is set to enter a competitive landscape alongside renowned rivals such as OpenAI's DALL-E and Midjourney. These US-based competitors have garnered significant global popularity and recognition.

Jingren Zhou, CTO of Alibaba Cloud Intelligence, expressed his excitement about Tongyi Wanxiang, stating that it marks a significant milestone in their journey to develop advanced generative AI models.

“With the release of Tongyi Wanxiang, high-quality generative AI imagery will become more accessible, facilitating the development of innovative AI art and creative expressions for businesses across a wide range of sectors, including e-commerce, gaming, design and advertising,” he added.

Alibaba Cloud also unveiled ModelScopeGPT, a powerful framework designed to harness the power of Large Language Models (LLMs) available on the platform. Alibaba Cloud launched its LLM named Tongyi Qianwen in April, and it plans to integrate the LLM across Alibaba’s various businesses in order to improve the user experience in the near future.

In the wake of OpenAI's successful ChatGPT chatbot, numerous prominent Chinese tech companies are taking decisive steps to launch their own AI products and services. In April, Alibaba launched its own AI chatbot, Tongyi Qianwen, to rival ChatGPT.


The Biggest Winner From Threads: LLaMA


Threads is all the hype on the internet right now, but arguably for all the wrong reasons. People have been migrating from Twitter to Threads and comparing features, but the real reason to launch the text-based social media platform is possibly to acquire data and build another large language model (LLM): a 'Super LLaMA.'

A month ago on the Lex Fridman Podcast, Mark Zuckerberg, the founder of Meta, said that the company is working on another version of LLaMA, its open-source LLM. In the same podcast, Zuckerberg also spoke about how the company is working on an alternative to Twitter, codenamed P-92. Within a month, it was launched as Threads, clearly taking inspiration from Elon Musk's Twitter. However, Musk is threatening to sue Meta for copying his platform.

Ironically, Zuckerberg tweeted for the first time in 11 years, and with just a meme.

pic.twitter.com/MbMxUWiQgp

— Mark Zuckerberg (@finkd) July 6, 2023

Zuckerberg has also been praising the approach of Mastodon and Bluesky in building decentralised platforms for users. He also noted how a platform like Wikipedia can be driven by its community. Interestingly, he said that the next version of LLaMA would possibly be trained on all the services Meta offers, which now include the text-based social media platform Threads.

Decentralised for the win

Elon Musk, the boss of Twitter, has long claimed that he wants to build an alternative to OpenAI's ChatGPT. For that, it is speculated that he might leverage Twitter data to make it more (or less) aligned. But there has been no news of anything actually being built since Musk said he was roping in a DeepMind researcher to build TruthGPT.

Now, Zuckerberg has seen that there is something powerful in the idea of training a chatbot on a social network's data. The goal of building an alternative to Twitter has possibly now turned into building an alternative to OpenAI. The best thing Meta can do here is keep the technology open source, which Zuckerberg and even Yann LeCun, the Meta AI chief, have been advocating all this while.

Now that Zuckerberg has Threads, he can collect and hoard data from users and use it to build generative AI models. Interestingly, the bid to open source AI models continues, and this time 'Super LLaMA' would be decentralised. That sounds like Zuckerberg giving control of data back to users, but there is possibly a trick up Meta's sleeve.

Another generative AI image of Mark threading the #threads pic.twitter.com/GaxpygGKas

— AB (@emeetab) July 6, 2023

Take the case of OpenAI's GPT-3.5 and GPT-4: the models are built by scraping the internet, a lot of which is Wikipedia pages, open to the public. Now, if Meta decides to make Threads decentralised, one of two things can happen: either the data would be completely publicly accessible, or it would be accessible only to Meta.

In either case, Meta would be able to avoid the legal issues of training on public data, or in this case social media data, simply by saying that the platform is decentralised. The current version of LLaMA is trained only on web-scraped data, not on Meta's services. However, now it seems like Meta might actually change that.

Meta is in the AI race, not the social media race

Instagram, WhatsApp, and Facebook hold a huge amount of data in text, video, and image formats. This could be beneficial for building a multimodal AI in the future, but when it comes to the legality of using that data, the questions will always remain.

While OpenAI is trying to solve so-called alignment problems, Meta is catching up. By betting big on open source, Zuckerberg is also offloading the responsibility of aligning AI onto everyone. Clearly, the plan is wild but might work in Meta's favour.

On the other hand, Zuckerberg knows that even with its current capabilities, LLaMA is still "an order of magnitude smaller than what OpenAI and Google are doing". Possibly, Musk has now given Zuckerberg the idea of building something on a text-based social media platform. So even if Meta is four years behind OpenAI in terms of AI, Threads might make it possible to catch up and build something to kill ChatGPT before Musk does. Musk needs to catch up.

Interestingly, Zuckerberg has already started restricting accounts on Threads, something Musk said was one of the main reasons he acquired Twitter, and also why he wants to build a rival to OpenAI's woke chatbot. So even if Meta builds a chatbot, it would not be similar to what Musk wants to build.

Moreover, Meta has already sprinted ahead of Twitter and even ChatGPT when it comes to users. Threads crossed 10 million users within 7 hours of its launch, whereas ChatGPT took five days to cross 1 million users. The only fishy part is the privacy policy around Threads: users currently cannot even delete their Threads accounts without deleting their Instagram accounts. It seems like a forced way to keep users on the platform, instead of offering them something that Twitter does not.


OpenAI Announces General Availability of GPT-4 API


OpenAI is making GPT-4 accessible to all paying API customers. It has also announced the deprecation of older Completions API models, which will no longer be developed or supported. OpenAI recommends that users transition to the Chat Completions API, which offers improved capabilities.

In March, the company introduced the ChatGPT API, and now millions of developers can access the GPT-4 API. Currently, existing API developers with a history of successful payments can access the GPT-4 API with an 8K context. By the end of the month, new developers will get the same access, after which OpenAI will start raising rate limits based on compute availability.

OpenAI is also making the GPT-3.5 Turbo, DALL·E, and Whisper APIs generally available. It is working on enabling fine-tuning for both GPT-4 and GPT-3.5 Turbo, which will allow developers to customise and train the models for specific tasks. This is expected to be available later this year.

Push for Chat Completions API

The Chat Completions API, introduced in March, accounts for 97% of OpenAI's API GPT usage. The Completions API, introduced in June 2020, allowed users to interact with language models using freeform text prompts. However, OpenAI found that a more structured prompt interface leads to improved results, hence the shift to a chat-based paradigm.

Through its structured interface and multi-turn conversation capabilities, the Chat Completions API lets developers build conversational experiences and complete tasks. It also offers increased security, with a reduced risk of prompt injection attacks.
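The contrast between the two request shapes can be sketched as follows. This is a minimal, illustrative example of the payload formats only (the role-tagged message schema and model names follow OpenAI's public documentation; no network call is made here):

```python
import json

# Legacy Completions-style request: a single freeform text prompt.
completions_request = {
    "model": "text-davinci-003",
    "prompt": "Translate 'hello' into French.",
}

# Chat Completions request: a structured list of role-tagged messages.
# Keeping system instructions separate from user input is part of what
# reduces prompt-injection risk, and the list naturally carries
# multi-turn conversation history.
chat_request = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are a concise translator."},
        {"role": "user", "content": "Translate 'hello' into French."},
        {"role": "assistant", "content": "Bonjour."},
        {"role": "user", "content": "Now into German."},  # follow-up turn
    ],
}

print(json.dumps(chat_request, indent=2))
```

Because the conversation is an explicit list, a follow-up turn is just another appended message rather than a re-pasted prompt string.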

OpenAI will focus future model and product improvements on the Chat Completions API, and from January 2024, the company will remove all the older completion models, replacing them with the newer ones listed below.

Source: OpenAI Blog

OpenAI Roadblocks

OpenAI has been facing a few hiccups in the last few days. GPT-4 API availability comes amidst news of ChatGPT seeing a decline in traffic: ChatGPT witnessed a 9.7% decline in June compared to May, unique visitors to the website dropped by 5.7%, and there was an 8.5% decrease in the time visitors spent on the site. A few days ago, OpenAI retracted the Bing-powered browse feature from the ChatGPT app two weeks after its release, owing to data leakage.


Battle of the Ts: Is Threads Any Match against Twitter?

The unveiling of Threads, a new app by Instagram, brings you what Insta could not: long-form content for 'text updates and public conversations'. Wait, what is Twitter then? Is Zuckerberg simply copying what other platforms are getting right, or has Meta really thought this one through?

Mark Zuckerberg :-
•Copied Reels Feature From Tik Tok .
•Copied Story Feature From Snapchat
•Copied Paid Blue Tick Idea From Elon Musk.
•Copied Entire Twitter App And Made #Threads . pic.twitter.com/iEfiUfSxD7

— Don Pappi (@_ngatia_) July 6, 2023

With 2.35 billion monthly active Instagram users, carrying all of them forward to the new Threads app may be the expectation. However, why would users of a photo-sharing platform want to download an additional app for conversations when they have Twitter? Has the Zuckerberg vs Musk fight finally broken the cage?

What’s New?

Sparking a meme-fest and immediate comparisons with Twitter, Threads is growing rapidly. As per reports, Threads crossed 10 million sign-ups in less than seven hours of its launch, having crossed two million in the first two hours; within 24 hours, the count reached 30 million. The Threads app allows posts of up to 500 characters, which can include links, photos, and videos up to 5 minutes long. Though Twitter has a limit of 280 characters, Twitter Blue users have a higher limit of 10,000.

Threads also lacks many of the features available on Twitter, such as direct messaging, search, and hashtags. Currently, users can see a feed of posts recommended by the app.

Threads does offer additional safety features, giving users the power to filter out restricted or unpleasant words, a feature that is not available on Twitter.

Source: Instagram Blog

While Meta has been touting safety features, Threads is not free of roadblocks. The app is not available in the European Union, owing to uncertainty over personal data use. Interestingly, Jack Dorsey also seemed to rally around the data privacy issues, sharing a screenshot of the data Meta will collect on users.

All your Threads are belong to us https://t.co/FfrIcUng5O pic.twitter.com/V7xbMOfINt

— jack (@jack) July 4, 2023

The company has also placed a bizarre clause on Instagram users. As per its Supplemental Privacy Policy, a user cannot delete their Threads profile without deleting their Instagram account, an obvious lock-in to retain customers.

The Other Players

Social platforms such as Mastodon and Bluesky are worth remembering on this occasion. Jack Dorsey's decentralised platform Bluesky uses an open protocol that serves as a bridge connecting different networks, allowing content to be shared and accessed easily. However, decentralisation has its fair share of problems, including minimal moderation, which can lead to a growing number of hate groups, spammers, or criminals.

Mastodon, a microblogging social network, runs on a set of decentralised, open-source servers. The platform is laden with security concerns: while it uses basic encryption for direct messages between users, end-to-end encryption isn't available, which can expose data to server admins.

Meta’s Meandering Vision

It would be apt to call the latest platform Threads 2.0, as this is not the first time Meta has used the name. In 2019, Threads was launched as an app meant to be used alongside Instagram. It focused on sharing updates and connecting with friends, but it did not become popular owing to inconvenient ways to read and respond to messages, and was shut down in December 2021. To fix those issues and perhaps come up with a convenient platform, Meta arrived at the best solution possible: copy Twitter.

Zuckerberg is infamous for copying features from other social platforms and replicating them on his own. Instagram features such as Stories and Reels are direct copies of Snapchat and TikTok respectively.

The Reigning Champion

But the question remains: why would a Twitter user who has built a massive list of followers shift to another platform that doesn't offer all the features of the former? Instagram, a photo- and video-sharing app, caters to an audience that wants to consume visual content, and converting that audience to a text-based forum seems tricky. Twitter, with 373 million users, is not just a text-sharing platform but is synonymous with news, exclusive announcements, conferences via Twitter Spaces, and many other features, none of which is available on Threads.

While some, including Mark Cuban and Lex Fridman, have hailed the new app as a success, others have called it an 'only-algorithmic timeline' with none of the good content found on Twitter. With its current features, Threads is far from reaching Twitter status. The new app may succeed in onboarding Insta users, but it has nothing new to offer the loyal Twitterati.

In a challenge to platforms that may try to imitate Twitter, Linda Yaccarino, CEO of Twitter, termed the platform "irreplaceable."

On Twitter, everyone's voice matters.
Whether you’re here to watch history unfold, discover REAL-TIME information all over the world, share your opinions, or learn about others — on Twitter YOU can be real.
YOU built the Twitter community. 🙏👏 And that's irreplaceable. This…

— Linda Yaccarino (@lindayacc) July 6, 2023


This Indian Startup is Going to Space


In the vast expanse of the unknown, where the limits of human exploration meet the boundaries of the universe, a new player has emerged, promising to redefine the space industry. Erisha Space, a startup based in India, is embarking on a mission to revolutionise space technology and usher in an era of cost-effective satellite solutions.

In an exclusive interview with AIM, Dr Darshan Rana, managing director and chairman of Erisha Space, said that instead of a competitive approach, Indian space tech startups are taking a collaborative approach within the industry.

Founded a year ago in New Delhi and now operating out of Bengaluru, Erisha Space has been leveraging multi-mission satellite sensors, ground instruments, SCADA, and socio-economic data to develop mathematical models for tracking and monitoring large-scale changes in the environment for tackling climate change.

“Erisha Space has developed the Satellite System Platform (SSP) with an aim to provide low-cost solutions to companies in agriculture, e-mobility, aerospace, and maritime and defence tech sectors,” said Rana. Erisha’s ambitions extend far beyond just satellites. The company envisions a comprehensive ecosystem that includes upstream satellite production, downstream ground systems, web GIS, and data products.

Last month, the company announced its plan to launch a satellite equipped with AI-based image-processing capabilities for environmental analysis and sustainable development. The satellite, set to launch in 2024, will boost the company's remote sensing capabilities via GIS and photogrammetry across various domains.

“By integrating AI and ML algorithms into our offerings, we aim to dramatically reduce costs, making remote sensing solutions more accessible to a wide range of industries, including agriculture, oil and gas, and defence,” added Rana.

Erisha Space’s team comprises various experienced players in the industry such as Debaddata Mishra, the director and COO of Erisha Space, who has also worked as a senior scientist at ISRO’s Gaganyaan Project.

Collaboration over competition is what the Indian space industry needs

Driven by a collaborative approach, Erisha Space and other space tech startups in India work together rather than engaging in cutthroat competition. Each company focuses on different segments, such as satellite production, launch vehicles, components, or software development, creating a symbiotic relationship that benefits the entire industry.

Even then, what sets Erisha Space apart from others is its holistic approach to space technology. “Unlike other companies focused solely on satellite production or launching, Erisha is involved in every aspect of the process,” said Rana. The startup designs and produces satellites in-house, develops ground systems applications, and analyses the data collected by their satellites, giving them the edge over others.

“I can say this is the collaborative approach because there is no competition as of now in India,” emphasised Rana. Some companies are developing satellites, others launch vehicles and components, while still others are developing software. “Companies like us are coming into all segments to act as supporters. We are working on the last segment so we can support agriculture, defence, oil and gas,” Rana explained of Erisha Space's position in the spacetech landscape.

Erisha Space’s technological innovations lie in their satellite designs. They are pioneers in the development of nano and micro-satellites with enhanced resolution capabilities. These miniaturised satellites can be modified and controlled remotely, allowing for real-time adjustments and reducing latency in data delivery. This breakthrough technology enables Erisha Space to offer comprehensive solutions that include high-quality imagery and advanced software analytics to their customers, all at a fraction of the cost of traditional satellite systems.

Government and future plans

In April, the announcement of the Indian Space Policy 2023 opened up the space sector to private players. “This strategy is undoubtedly highly encouraging and beneficial for space companies like ours, who are attempting not only to develop low-cost space technologies, but also to make space technology accessible, acceptable, and affordable to society,” said Rana. With this approach, NGEs can now use ISRO's test facilities and R&D expertise for a minimal user fee; building such facilities would otherwise be both expensive and time-consuming. “As a startup, we cannot afford such investments,” added Rana.

This highlights how, like any space startup, Erisha Space faces its fair share of infrastructure challenges. “The industry is still relatively new, and the lack of infrastructure and readily available technology components pose obstacles,” said Rana.

While India is just beginning to tap into the potential of the private space sector, it is learning valuable lessons from countries like the US. The Indian government has also planned to allow 100% foreign direct investment (FDI) in the space sector, paving the way for big-tech companies to invest in India. Rana said that the Indian government could further support the growth of the space tech startup ecosystem by introducing incentives and policies that encourage innovation and attract more investment.

Relying on third-party organisations for manufacturing and launch services also presents difficulties. Additionally, securing adequate funding for their ambitious projects remains a constant challenge. Despite these obstacles, Erisha Space is determined to overcome them by developing most of their components and subsystems in-house, collaborating with experienced scientists and space agencies, and exploring fundraising options.

Looking to the future, Erisha Space aims to develop a data analytics platform that combines data from satellites, airborne sources, and ground stations. By leveraging AI and ML processing methods, Erisha Space intends to offer comprehensive monitoring systems for agriculture, oil and gas, defence, infrastructure planning, agronomics, and mapping.

As Erisha Space progresses towards its goals, they have set their sights on launching an SSLV (Small Satellite Launch Vehicle) by 2026. This launch vehicle will have a payload capacity of 1,000 kg. Additionally, Erisha is actively researching reusable satellite technology to further enhance the efficiency and cost-effectiveness of their systems, something which ISRO achieved just a few months back with RLV LEX.

It is clear that this startup is here to stay, and its collaborative approach will help it carve out its place in the Indian space ecosystem.


How To Access Google Bard Quickly (Step-by-Step Guide)

The Google Bard logo connecting to many different ways of accessing it.
Illustration: Andy Wolber/TechRepublic

Every time you want to access Google Bard, you could enter the site URL: https://bard.google.com. Once there, type your prompt, or tap the microphone and talk. But with a little configuration, your browser can open Bard automatically, or you can access it with a single click or tap.

Google Bard provides a chat-style experience that allows for a series of prompt-and-response interactions, unlike a conventional Google search, which responds to a single string of keywords. You may find a few Bard prompts to be more useful than a series of Google searches; if so, it makes sense to streamline your setup for fast access to Google Bard. In this tutorial, I detail six ways to simplify accessing Google Bard. Keep in mind that Bard is an experiment and may sometimes provide inaccurate information.

For the configurations to work, you’ll need a Google account with access to Bard in one of the more than 180 countries where Bard is available. If your account is managed by a Google Workspace administrator, your admin will need to allow access. After you sign in to Bard, the steps suggested below let you access Bard with a tap, a click or with every search.

Jump to:

  • How to add a bookmark to Bard in Chrome
  • How to add Bard to the home screen on Android
  • How to add Bard to the home screen on an iPhone or iPad
  • How to configure the home icon to open Bard in Chrome
  • How to set Bard as your Chrome browser startup page
  • How to add an extension to access Bard next to Google Search

How to add a bookmark to Bard in Chrome

To add a Bard bookmark in Chrome, follow these steps:

  1. Open Chrome and go to https://bard.google.com.
  2. Select the star (Figure A, top).
  3. If desired, modify either the bookmark name or folder. In most cases, you will want to leave the default of Bard and the standard Bookmarks Bar folder selected.
  4. Select the Done button (Figure A, bottom).

Figure A

While at https://bard.google.com, select the star (top), then select the Done button to add the site to your bookmarks.

Optionally, select Bookmarks | Bookmark Manager, then select and drag the Bard bookmark to modify the bookmark location in the list.

How to add Bard to the home screen on Android

On an Android device with Chrome installed, you can create a home screen icon link to Bard.

  1. Open Chrome and go to https://bard.google.com.
  2. Select the More menu (the three dots in a vertical row, as shown in Figure B, left).
  3. Select Add To Home Screen (Figure B, middle).
  4. If you wish, modify the display name.
  5. Select Add (Figure B, right).
  6. Select Add To Home Screen (Figure C, left). The system will create an icon of a Chrome link to Bard on your home screen (Figure C, right). Tap the icon to open Chrome and go to Bard.

Figure B

Use Chrome on an Android device to add a link to Bard on the home screen.

Figure C

Tap the Add To Home Screen button to create the home screen icon for Bard (left). Then, tap the icon to open Bard.

How to add Bard to the home screen on an iPhone or iPad

On an iPhone or iPad, you may create a home screen link to Bard with Safari.

  1. Open Safari, and go to https://bard.google.com.
  2. Select the Share icon (the arrow pointing upward out of a square, as shown in Figure D, left).
  3. Scroll down through the options, and select Add To Home Screen (Figure D, center-left).
  4. The home screen bookmark will be named Bard by default — modify it if you wish.
  5. Select Add (Figure D, center-right).
  6. The system will add the link to an available location on your iPhone or iPad home screen (Figure D, right). Tap the icon to open Safari and access Bard.

Figure D

Use Safari to add a home screen link to Bard on iPhone or iPad.

How to configure the home icon to open Bard in Chrome

You may configure Chrome so the home button opens to Bard, although this configuration may be less commonly used than a conventional bookmark or home screen link. To configure this, follow these steps:

  1. Open Chrome, and then go to chrome://settings. (Alternatively, on some systems, you may access this from the Chrome or More | Settings menu, as shown in Figure E, left.)
  2. Select Appearance (Figure E, right).
  3. Set the slider next to the Show Home button to the right.
  4. Select the option to enter a custom URL, and enter https://bard.google.com, as shown in Figure E, right.

Once configured, select the home button to access Bard.

Figure E

Configure the Chrome browser home button to open Bard.

How to set Bard as your Chrome browser startup page

You also may choose to open to Bard when first starting Chrome. To add Bard as one of your startup pages, follow these steps:

  1. Open Chrome, and then go to chrome://settings. Alternatively, on some systems you may access this from the Chrome or More | Settings menu, as shown in Figure F, left.
  2. Select On Startup (Figure F, right).
  3. Choose Open A Specific Page Or Set Of Pages.
  4. When prompted, enter the URL for Bard: https://bard.google.com/.

Whenever you start Chrome, Bard will be one of the pages automatically opened.

Figure F

Configure Chrome to open Bard whenever you start the browser.

How to add an extension to access Bard next to Google Search

If you want to display Bard-style responses every time you conduct a Google search, you’ll need to install a third-party Chrome extension as there is no official Google setting to enable this as of early July 2023. For example, to install the Bard For Google extension, follow these steps:

  1. In Chrome, open the Bard For Google extension.
  2. Select the Add To Chrome button (Figure G, top).
  3. If you agree with the displayed terms, select Add Extension (Figure G, bottom). The Bard For Google extension will be installed. (Note: If you are concerned about security, after installation, you might manage the extension’s permissions to allow it access only to two sites: https://bard.google.com and https://google.com/.)

Figure G

Install a third-party extension to show a Bard response alongside each Google search.

Bard For Google automatically opens a new Google search for the word “cat” to demonstrate how the extension works: the extension displays a Bard response within an inset box alongside the Google search results (Figure H). Select the Let's Chat button to enter an added prompt for an additional response. This ability to chat to refine or further explore a topic may often be more useful than a one-off Google result.

Figure H

Bard for Google not only displays a prompt response but also offers a ChatGPT response option, as shown here in the inset to the right of Google search results. In both cases, you may need to sign in to an active account with each respective service before the extension can display a response.

Mention or message me on Mastodon (@awolber) to let me know what tweaks you have implemented to rapidly access Google Bard or other modern chatbot systems.


TCS to Train 25,000 Engineers on Azure OpenAI

TCS on Thursday announced plans to significantly scale its Azure OpenAI expertise, with 25,000 associates to be trained and certified on Azure OpenAI to help clients accelerate their adoption of this powerful new technology.

The company also plans to launch its new Generative AI Enterprise Adoption offering on Microsoft Cloud to help customers jumpstart their generative AI journey. This initiative aims to assist clients in improving customer experiences, introducing new business models, increasing revenue, and boosting productivity. By leveraging TCS’ extensive knowledge and resources, this offering helps organizations harness the power of Generative AI for their overall growth and success.

TCS is a member of Microsoft’s AI Council, has earned a Partner Designation in Data and AI, and has obtained Microsoft specializations in AI and machine learning on Azure and analytics on Azure.

“With TCS Generative AI Enterprise Adoption, our joint customers can unlock new growth opportunities and embark on an exciting journey of innovation – guided by our AI expertise and in-depth knowledge of Microsoft Cloud,” said Siva Ganesan, head of the Microsoft Business Unit at TCS.

Recently, TCS partnered with Google Cloud to offer generative AI services to its customers, including Google’s generative AI services through Vertex AI, Generative AI Application Builder, and Model Garden, paired with TCS’ own solutions.

TCS is catching up now, focusing on developing AI-powered products and platforms. Additionally, the company plans to collaborate with global academic partners and startup ecosystems to invest in research areas crucial for the future, including AI-driven transitions in energy and supply chains.

The post TCS to Train 25,000 Engineers on Azure Open AI appeared first on Analytics India Magazine.

Top 7 Generative AI Courses by Andrew Ng

When it comes to courses that actually add value to your career, Andrew Ng, founder and CEO of DeepLearning.AI, is the first and most credible name that comes to mind. His courses are comprehensive, covering an extensive range of topics in generative AI, including diffusion models, generative adversarial networks (GANs), and variational autoencoders (VAEs). What sets his offerings apart is the collaborative effort with industry giants such as AWS and OpenAI, which further reinforces their credibility.

Remarkably, these courses are provided completely free of cost and can typically be completed within one to two hours, making them easily accessible and time-efficient.

Given the importance of generative AI at this moment, we have curated a concise list of seven of his courses in this space.

Generative AI with Large Language Models: Enrol in this course to gain a comprehensive understanding of generative AI’s lifecycle using LLMs and the underlying transformer architecture. Learn effective utilisation of LLMs for diverse tasks, model selection, and appropriate training techniques. Instructors include Andrew Ng, Antje Barth (principal developer advocate at AWS), Chris Fregly (principal solutions architect at AWS), Shelbee Eigenbrode (principal solutions architect at AWS), and Mike Chambers (developer advocate at AWS).

Check it out here.

LangChain: Chat With Your Data!: The latest addition to Ng’s catalogue, instructed by Harrison Chase, co-founder and CEO of LangChain, this course focuses on Retrieval Augmented Generation (RAG) and advanced chatbot building. Topics covered include document loading, splitting, vector stores, retrieval techniques, question answering, and chatbot development. Python developers keen on leveraging these technologies will find the course valuable.

Click here to enrol.

LangChain for LLM Application Development: This is another course with Chase that will teach you how to apply LLMs to data, build personalised assistants and chatbots, and utilise agents, chained calls, and memories for enhanced LLM utility. The course covers models, prompts, parsers, memory implementation, chain construction, and leveraging LLMs for question answering over documents. It also explores LLMs as reasoning agents. It is a one-hour beginner-friendly program that requires basic Python knowledge.

Tap here to register.

How Diffusion Models Work: This intermediate-level course, taught by Sharon Zhou, CEO and co-founder of Lamini, explores how to build and optimise diffusion models. In the course, you will learn about the diffusion process, how to build neural networks for noise prediction, and how to enhance image generation with contextual information. You will also gain practical coding skills and hands-on experience with creating personalised diffusion models. By the end of the course, you will have a solid foundation in diffusion models and be able to explore them for your own applications. Prior knowledge of Python, TensorFlow, or PyTorch is recommended.

Click here to sign up.

ChatGPT Prompt Engineering for Developers: DeepLearning.AI collaborated with generative AI leader OpenAI for this course, in which Isabella Fulford, a member of technical staff at the latter, will take you through how to use LLMs effectively to build powerful applications. The course covers prompt engineering, LLM API usage for summarisation, inference, text transformation, and expansion. It also emphasises crafting successful prompts, systematic prompt development, and creating personalised chatbots.

Get started by choosing this option.

Building Systems with the ChatGPT API: Again in partnership with OpenAI and co-led by Fulford and Ng, this program teaches the automation of complex workflows using chained calls to a powerful language model. Topics covered include interactive prompts, Python code utilisation, and customer service chatbot development. Practical applications encompass query classification, safety evaluation, and multi-step reasoning. The course offers hands-on examples and Jupyter notebooks, and follows beginner-friendly practices for maximising LLM performance responsibly. It is suitable for those with basic Python knowledge as well as ML engineers seeking prompt engineering skills for LLMs.

Make the move and click here to join.

Mathematics for Machine Learning and Data Science Specialisation: Even before generative AI became a buzzword, Ng, understanding the importance of mathematics in this sector, introduced this course to give people an intuitive understanding of AI’s most crucial maths concepts, such as linear algebra, calculus, and probability. The course is led by Luis Serrano and co-created by Ng alongside Anshuman Singh, Magdalena Bouza, and Elena Sanina.

Enrol for the course for free here.

Besides these short courses, Ng also provides specialised courses on more specific topics like AI for medicine, TensorFlow practices, ethical AI and more. You can find more information about them here.
