ChatGPT’s new app comes out of the gate hot, tops half a million installs in first 6 days

Sarah Perez @sarahintampa

Despite being U.S.- and iOS-only ahead of today’s expansion to 11 more global markets, OpenAI’s ChatGPT app has been off to a stellar start. The app has already surpassed half a million downloads in its first six days since launch, according to a new analysis by app intelligence provider data.ai. That ranks it as one of the highest-performing new app releases across both this year and the last, topped only by the February 2022 arrival of the Trump-backed Twitter clone, Truth Social.

As consumer demand for AI chatbots heated up, third-party apps calling themselves “ChatGPT” or “AI chatbot” filled the App Store. While many of these were essentially fleeceware, trying to trick consumers into paying for expensive subscriptions to access their AI, a combined group of top apps still managed to pull in millions in consumer spending. This competitive landscape could have made it tougher for an official ChatGPT app to gain traction. But as it turns out, that was not the case.

OpenAI’s ChatGPT app outperformed most of its rivals, including other popular AI and chatbot apps as well as Microsoft’s apps, Bing and Edge, which offered some of the first significant third-party integrations of OpenAI’s GPT-4 technology.

Though Bing and Microsoft Edge certainly benefitted from the interest in ChatGPT at their debut, seeing a respective 340K and 335K downloads across iOS and Android in their best five-day periods in February, OpenAI’s ChatGPT app easily topped them, generating 480,000 installs in the first five days of its U.S. launch, when the app was iOS-only.

Compared with just Bing and Edge’s iOS downloads alone, ChatGPT was even further ahead with its 480K installs versus Bing’s 250K and Edge’s 195K.

Image Credits: data.ai

However, Bing and Edge were still ahead of ChatGPT when looking at all U.S. downloads in May across both app stores — but not when comparing only iOS installs for the month. That indicates ChatGPT may soon pull ahead of these search-focused alternatives.

Image Credits, above and below: data.ai

Data.ai’s analysis also found the app outperformed other top AI chatbot apps in the U.S., many of which were generically named in order to capitalize on consumer searches for keywords like “AI” and “chatbot” on the App Store. Here, OpenAI’s ChatGPT found itself in the top five by downloads, when ranked against other apps’ best five-day periods in 2023 across the App Store and Google Play.

The only app to beat it was “Chat with Ask AI,” which saw 590,000 installs from April 4-8, 2023, compared with ChatGPT’s 480,000 installs from May 18-22, the data indicates.

Image Credits: data.ai

Though it has only been available for a week, ChatGPT is also already ranking in the top five among AI chatbot apps by U.S. downloads in May 2023. At the time of data.ai’s number crunching, its comparison covered ChatGPT and other top chatbot apps for the month of May through the 23rd — so, technically, less than a week since ChatGPT’s launch.

By then, the app had seen 550,000 downloads, tying with Genie – AI Chatbot, the next nearest ranked AI chatbot app by May downloads on the U.S. App Store. A few others were further ahead, however, including ChatOn — AI Chat Bot Assistant (610K installs), AI Chatbot – Nova (680K installs), and Chat with Ask AI (1.4M installs). Still, given how quickly ChatGPT was able to top half a million installs, it may soon beat these rivals.

Image Credits: data.ai

In addition, ChatGPT had one of the best new app debuts this year — and in 2022, data.ai found.

In ChatGPT’s first five days of U.S. iOS downloads post-launch, it generated 480,000 installs, which ranked it as the No. 2 biggest app launch, behind Truth Social, which saw 630,000 downloads. The next-largest debuts (i.e., the first five days post-launch) included the March 2023 arrival of Widgetable: Lock Screen Widget (360,000 installs), and the 2022 launches of MyNBA2K23 (310,000 installs) and sendit – Q&A on Instagram (260,000 installs).

This also put ChatGPT in the >99.99th percentile for new app launches in the U.S. since 2022 on iOS.

Data.ai notes that only the top 1% of apps generated more than 10,600 U.S. downloads in their first five days, and the top 0.1% had more than 45,000. Its analysis included data for roughly 39,000 apps that launched in the U.S. on iOS since the start of 2022 and then ranked in the top charts at some point over this period. (The data doesn’t include Apple’s first-party apps, like Apple Music Classical).

Image Credits: data.ai

Of course, installs are only one way of measuring consumer demand and are not as reliable as analyzing how many people then signed up and became active app users.

However, because ChatGPT is still so new, data.ai won’t yet have accurate estimates on metrics like daily or monthly active users, it says — that may take another few weeks to generate.


Microsoft To Ape Google’s Success With Windows AI DevTools

This year’s Microsoft Build had something for everyone. From the average end user to the enterprise to developers, Microsoft announced a variety of products and AI enhancements to its product suite. To keep this momentum going, it also created a suite of development tools, hoping to harness the power of the open-source community to strengthen its ecosystem.

Over the course of the conference, Microsoft announced a variety of tools aimed at empowering developers. While this is to be expected at a developer-focused conference, it also comes against the unique backdrop of Microsoft’s AI innovations.

AI tools for all

Apart from updates to GitHub Copilot, Microsoft focused on opening up access to its ecosystem to budding AI developers. The biggest among these announcements was arguably the launch of the Windows AI library. According to Microsoft, this library will help developers create AI applications on both ARM and x64-based machines.

Capitalising on Windows’ large and diverse install base, developers can get curated models from Microsoft and integrate them into their applications. One caveat, however, is that these AI features function on a hybrid basis rather than purely on-device, making API calls to Azure for AI inference.

However, for edge processing, Microsoft has also worked closely with hardware providers like Intel, Qualcomm, AMD, and NVIDIA for AI support on the silicon level. Through this partnership, Microsoft says that more Windows devices will include neural processing units, allowing developers even more AI processing power without having to resort to the cloud.

The company also released a plugin standard for Microsoft 365 Copilot. This new feature allows developers to integrate their own apps and services into Microsoft 365 Copilot. This not only extends the usability of Microsoft’s applications, but also allows for connection with third-party applications. The company also launched the Teams Toolkit to make the plugin dev process easier.

The Microsoft Store is also getting a dev-focused update in the form of the MS Store AI Hub. Microsoft stated that this will be a dedicated section in the store that will “curate the best AI experiences built by the developer community and Microsoft.”

The developer ecosystem is a huge value add in the AI age. Examples range from fully open source projects like AutoGPT and LangChain, created to amplify the potential of LLMs, to optimisations to already existing models, like Vicuna and Alpaca (based on Meta’s LLaMA). It seems that Microsoft also wants to capitalise on the dev ecosystem wave to make Windows 11 better than ever before.

Microsoft is learning from past failures

The strategy of leveraging the open-source developer ecosystem is nothing new to tech giants. In fact, this strategy was refined and perfected by Google with the Android ecosystem. After Microsoft’s own failure to create a competing mobile phone operating system (OS), it seems that the company wishes to bring this model to their desktop OS.

While there have been many failures in Microsoft’s long and storied history, the biggest one by far was the Windows Phone. Launched in competition against Android and iOS, Microsoft famously spent $400 million launching this ill-fated mobile OS. While there are many reasons that the product line and accompanying OS bit the dust soon after, the biggest of them was lack of support from the developer ecosystem.

To look at how instrumental developers are for the success of a platform, one needn’t look further than Android. Competing against iOS, which had a first-mover advantage in the market, Google opened the door for developers to pick up the slack. They did so by promoting an open market for devs, reducing the barrier of entry by creating robust dev tools, and even allowing them to modify the kernel of the OS.

While Google created a healthy dev ecosystem for Android, Microsoft’s Windows Phone neglected these core components. Instead, the company chose to incentivise developers with money, even going so far as to write apps for them. However, devs shied away, mainly due to the OS’s low market share, as well as the difficulty of developing for the platform.

With Windows 11 and the Copilot revolution, it seems that Microsoft is moving past its shortcomings with Windows Phone and hooking into the developer ecosystem. The developer community is well known for being a fountainhead of innovation and endless labour, making them a great fit for the AI ecosystem Microsoft has opened up with Windows Copilot.

By passing on the work of creating novel plugins and applications to the wider ecosystem, Microsoft not only saves money, but increases the value present in the ecosystem. This is also seen by the AI showcase in the Microsoft Store, which provides a platform for developers to reach a wider audience for their applications and plugins.

Just as opening up the Android ecosystem helped Google grow into the behemoth it has today, allowing low-level access to Copilot will also change the game for Microsoft. The offerings at Build paint a picture of Microsoft’s goals to use the dev ecosystem as rocket fuel to reach for the stars.

The post Microsoft To Ape Google’s Success With Windows AI DevTools appeared first on Analytics India Magazine.

New Thoughts on Leveraging Cloud for Advanced AI

Sponsored content by Microsoft/NVIDIA | May 25, 2023

Artificial intelligence (AI) is becoming critical to many operations within companies. As the use and sophistication of AI grow, there is a new focus on the infrastructure requirements to produce results fast and efficiently. Many companies find that firing up cloud instances is not enough. Instead, companies must take a more strategic view of their cloud adoption to have the IT foundation required to fully use state-of-the-art AI. Doing so can deliver significant results across a wide variety of industries.

Specifically, AI requires an infrastructure that can meet the constantly increasing demands for high-performance compute and specialized needs of AI applications and workloads such as natural language processing, machine learning, and deep learning. To that point, a suitable infrastructure to support advanced AI must easily scale up and out.

Cloud infrastructure purpose-built for advanced AI

The recent Harvard Business Review Analytic Services whitepaper, Rethinking Cloud Strategies for Advanced AI, noted the benefits of such an AI-first infrastructure and quantified how companies in different industries benefit from its use. According to the whitepaper, advanced AI applications must be supported by a cutting-edge infrastructure that provides the performance, flexibility, and scalability these applications demand. But not just any cloud will do.

The diversity of cloud offerings gives organizations many options for their AI needs. That is particularly the case with generative AI. So, the question has shifted from whether to use the cloud for AI applications to which cloud provider best aligns with a company's strategic vision for AI. The selection will depend on the capabilities of the cloud vendor and the ecosystem of partners and vendors that is built around the vendor’s offerings.

These and other points were the subject of a recent Harvard Business Review Analytic Services webinar, Rethinking Cloud Strategies for Advanced AI, which discussed cloud strategies to support advanced AI. (The webinar can be viewed on-demand.) The speakers included IDC’s Ritu Jyoti and Nidhi Chappell, General Manager of Azure HPC for AI, SAP and Confidential Computing at Microsoft. Their talk examined how advanced AI creates unprecedented growth opportunities, the problems companies face related to cloud and AI technologies, and how to choose the right cloud platform for your AI goals.

Let's look at some examples from the leading companies in healthcare, automotive, fashion, and conservation featured in this Harvard Business Review Analytic Services whitepaper.

Innovative AI-led personalized cancer treatments

While radiology, long used to diagnose cancer, has embraced AI, Elekta, a Stockholm-based maker of precision radiation therapy solutions, focused on a related but more involved area: radiotherapy, which is used to treat cancer. Elekta found that many people worldwide do not have access to the needed personalized therapy, not because of a lack of technology but because of a shortage of the medical personnel from diverse disciplines who must collaborate to ensure the correct adjustments are made to treatment plans.

“We realized the tsunami of AI innovations that were happening in the computer vision and text recognition fields were eventually going to find their way into the medical field, as well,” said Rui Lopes, Director of New Technology Assessment at Elekta.

To address the problem, Elekta embeds intelligence into its devices to increase access to treatment for a larger swath of patients worldwide. “This provides not just personalization of care but democratization of a standard of care, allowing more advanced protocols to be deployed in regions of the world that lack the human capital to do so now,” said Lopes.

The models Elekta uses must easily scale. “You need to radically scale up the amount of data you use,” said Adam Moore, Director of Global Cloud Solutions for Elekta. “By training the models in the cloud, you can identify those problems earlier and build resilience into your compute infrastructure, so you avoid hardware failures.”

“We rely heavily on Azure cloud infrastructure. With Azure, we can create virtual machines on the fly with specific GPUs. If that’s not enough, we can cancel that virtual machine, create a new one, and then scale up as the project demands,” says Silvain Beriault, Lead Research Scientist at Elekta.

Developing a new generation of autonomous vehicles

Wayve, a London startup, is trying to bring deep learning and AI to the next generation of autonomous driving, something it calls AV2.0 (autonomous vehicles 2.0). In particular, the company wants to accelerate and scale autonomous vehicle development by using vision-based machine learning for rapid prototyping and quick iteration.

“Advanced AI, the latest and greatest, is absolutely pivotal to what we’re doing,” says Jamie Shotton, Chief Scientist at Wayve. “We have to train the algorithm on petabytes and potentially greater amounts of data that we’ve captured from our fleet of cars, which is a radically different approach to autonomous self-driving than anyone has done before.”

Moving to Azure infrastructure allows Wayve to rapidly improve its iteration speed and innovation rate for new autonomy features, which, in turn, helps cars drive better. Through its use of Azure Machine Learning, the company trains its AV2.0 models 90 percent faster compared to its previous data center environment.

“Using a managed platform gives us the ability to scale quickly and reliably. It also allows us to focus our efforts doing the research and solving problems around autonomous self-driving rather than building additional tools ourselves,” Shotton says.

Creating new fashions at the speed of the market

Fashion is one of the fastest-growing, most lucrative, and most demanding industries, with high expectations of quick turnarounds, creative designs, and a constant parade of new styles. As such, Portugal-based Fashable is trying to change the fashion industry with AI.

“In the near future, you will have a digital closet of clothing designs that you can ask a manufacturer to produce just for you,” says Orlando Ribas Fernandes, CEO and Co-founder of Fashable. “We will use the metaverse to create physical goods that are exclusive to each person.”

Using Azure AI infrastructure, powered by NVIDIA GPUs for deep learning, Fashable built a generative AI application that can create dozens of original AI-generated clothing designs in minutes without the need for actual material. The algorithm ingests data from multiple sources to learn about trends, styles, and clothing types. Using social media to do A/B tests directly with customers lets designers gauge interest and forecast demand for their particular creations before going into production.

“We can share the collection with customers before it is produced, avoiding the problem of overstock,” says Fernandes.

Protecting endangered species from wildlife crime

Wildlife Protection Solutions (WPS) uses artificial intelligence on remote-camera images for the conservation of endangered species and ecosystems. Its work helps recognize threats, classify species, and prevent poaching and human-wildlife conflict.

“Conservation is a huge challenge globally, and we’re not necessarily winning the war,” says Eric Schmidt, Executive Director of the organization. To improve its odds in the fight, WPS is arming itself with AI models that search images from thousands of camera feeds, looking for humans and vehicles that may be engaged in suspicious activities or animals that may be encroaching on human populations.

For its AI needs, WPS uses Microsoft Azure’s purpose-built AI infrastructure powered by NVIDIA GPUs. For example, the group’s wpsWatch platform analyzes and monitors inbound images from remote cameras hosted at more than 100 sites across almost 20 countries. The platform runs on Azure virtual machines with NVIDIA GPUs and was initially focused on the security and anti-poaching elements of the group’s mission.

A look to the future

These examples demonstrate the growing use of purpose-built infrastructure for AI. As companies increasingly adopt the latest AI technologies, like generative AI, to transform their applications, access to such infrastructure will be critical for deriving business and economic value from AI quickly.

Learn more

Read the Harvard Business Review Analytic Services whitepaper “Rethinking Cloud Strategies for Advanced AI” and watch the webinar.

Visit the Microsoft and NVIDIA HPCwire Solution Channel for more articles and insights.

#MakeAIYourReality
#AzureHPCAI
#NVIDIAonAzure


What to do if Generative Fill is grayed out in your Adobe Photoshop AI beta


Generative Fill is part of the new Adobe Photoshop hotness coming as a result of Adobe's Firefly and Sensei AI efforts. The company has released a beta version of Photoshop with the new feature which is available to anyone with a Creative Cloud subscription who downloads from the Beta channel.

Also: The best AI art generators to try

Unfortunately, after downloading the beta, some people are finding that the Generative Fill feature is disabled. #SoFrustrating.

The good news is this is fixable, but it's sure not intuitively obvious. Here's how to enable Generative Fill on your up-to-date computer if it's grayed out.

How to enable Generative Fill

First, make sure you're logged into the Creative Cloud desktop app. We'll come back to it in a bit, and it helps if you're already logged in.

Now, here's how to share your birthday and unlock the Generative Fill feature. From the Creative Cloud Apps tab, scroll down and click Behance on the lower left.

This will take you to this worrisome, but necessary screen:

Assuming you're willing to risk sharing your personal information with Adobe for access to Generative Fill, give Behance your month and year of birth. For those who don't use the service, Behance is a social media platform that lets you showcase your work to other Adobe users. So, of course it becomes the gatekeeper to a new AI feature…because…I'm sure there were meetings.

Also: Google's Bard AI says urgent action should be taken to limit (*checks notes*) Google's power

In any case, once you enter your information, quit the Photoshop beta and relaunch. Now, you'll have Generative Fill enabled.

So, what can you do with Generative Fill?

It fills in areas you select with images generated by the AI. In my testing, it's hit or miss, but here's an example that's pretty impressive. Let's start with a photo of a truck I took some time ago:

As you can see, there's no driver. I selected the front and side windows, and when I clicked Generative Fill, I gave it the prompt "driver inside truck." Here's what Photoshop provided:

It did something weird with the side mirror, but it also added someone inside the truck. Note how good the shadow is on the driver's arm. That's impressive.

How does this differ from Midjourney and DALL-E?

Midjourney and DALL-E generate entire images from text prompts. Photoshop's Generative Fill lets you work with areas of your own images and add features and details in specific locations.

Also: Human or bot? New Turing test AI game challenges you to take your best guess

Stay tuned, though, because this is only early days. So, are you going to give up your birth date information and give Generative Fill a try? Let us know in the comments below.

You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.


Meet Aria: Opera’s new built-in generative AI assistant


Aria is always accessible on the built-in sidebar to the left.

As Microsoft incorporates more artificial intelligence tools into its products, Opera has rolled out one of its own. Opera has just debuted Aria, a new AI chatbot that is built into the sidebar of its browser, much as the new Bing is built into Microsoft Edge, and is available for free.

Using Opera's "Composer" infrastructure, Aria harnesses OpenAI's GPT technology and features enhanced capabilities, like access to live results from the web, text and code generation, and advanced customer support.

Also: How to use the Opera VPN

Much as the new Bing chat is powered by OpenAI's GPT-4, Aria boasts technology similar to ChatGPT's, with further training from Opera and internet access.

Aria joins ChatGPT, Facebook Messenger, TikTok, and WhatsApp on the Opera browser sidebar as the company's own natively built-in generative AI assistant. The new assistant has access to the internet, so its knowledge is not limited to content from before 2021, as the free version of ChatGPT's is (the paid ChatGPT Plus version differs).

Also: How to enable tracker blocking in Opera One

As such, Aria is just the latest development in Opera's plan to integrate generative AI into its browsing experience, following the ChatGPT integration and recent launch of Opera One.

How to try out Aria

Users who want to access Aria just need to download the newest version of Opera One on desktop; Android users can download the beta version of the browser from the Google Play store. Access to Aria hinges on having an Opera account, which anyone can create for free with a valid email address.

Opera has made Aria an expert on its database of support documentation, combining that with knowledge of the company's current products to help customers who need assistance.

Also: How to fix Opera not displaying Facebook and Twitter videos issue on Linux

This same Composer infrastructure makes Aria capable of connecting to other AI models, and Opera plans to integrate more features in the future, such as multiple search services from different Opera partners.

At this time, Aria is in the early stages of Opera's project, but the company hopes to integrate it further into the browsing experience, eventually blending it in to assist with cross-browser tasks.


How This Bengaluru Boy Cracked Netflix, Twitter and DoorDash Interviews

Each year, over 15 lakh (1.5 million) engineers graduate in India, out of which a mere two lakh (200,000) get employed. The startling conversion rate could reflect low job demand or poor employability of candidates, but most importantly it highlights the fierce competition that young tech graduates face today.

But however dire the situation may be, with planning and preparation one can stand out from the crowd – be it a fresh graduate looking to land their first job or an engineer seeking career growth. To take you through an engineer’s growth plan, Analytics India Magazine got in touch with Vipul Bharat Marlecha from Netflix’s engineering team.

Bengaluru boy Vipul works as a senior data engineer at Netflix, where he is part of the Open Connect team. Having completed his BE from PESIT (Bangalore) and a master’s degree in computer science from the New Jersey Institute of Technology, he has essentially worked with data engineering teams at companies such as Twitter and DoorDash. However, like the majority of engineers, discovering his interest in the field was no easy feat, and it happened only along the way. “When I was a software engineer working on test automation frameworks, I chanced upon the role of data engineering and wanted to try it for three months. I loved what I was doing and decided to make a career out of it.”

Evolving Interviews

Vipul has noticed that the interview process for fresh graduates is similar across companies, and the focus changes as you get into senior roles. “As a young engineer, 70% of what you prepare is standard across tech companies.” Two major rounds at big tech companies are the coding round, which covers data structures and algorithms, and the system design round, which for freshers tests theoretical systems knowledge, with expectations not as high as those for a senior candidate. The remaining rounds are company specific, like HR and culture rounds to see if the candidate is a fit for the organisation. The weightage for these rounds varies from company to company.

“Companies like Netflix and Amazon give more weightage to culture round to understand how you work as a team, how you lead a team, and lead the vision.” Fresh graduates can prepare for this round by understanding the company’s vision and work culture beforehand through their website.

Interview Preparation as a Student

For an engineering student, interview preparation should start as early as possible. “No matter how smart you are, how much knowledge you have or what GPA you score, cracking interviews is a different ballgame. I would recommend starting preparation in the final year itself, and having a proper strategy in advance,” he said. Keeping abreast of changing technology and reading technology-related books on coding and system design helps. ‘Cracking the Tech Career’ by Gayle Laakmann McDowell is one that can help in understanding interview processes at big techs.

Vipul also believes that mock interviews will help prepare a person. “Always try to find a mentor, friend or someone experienced to conduct a mock interview. There are a number of platforms that offer mock interviews.”

Traits That Stand Out

It helps to first understand the problem an interviewer poses before jumping in to solve it. “As an interviewer, I would prefer a candidate who has a conversation about the problem and tries to explain the trade-offs. Even if he is unable to solve the problem, it is clear that he knows what he is doing, rather than one who solves the problem without telling me anything.”

Networking is the Key

“Networking matters.” Vipul believes that when we are young, we need to talk to as many people as possible. “Don’t be shy, and don’t be afraid to get out of your comfort zone.” Networking is not just for opportunities, and not every conversation has to be useful right away, but it can be helpful in the long run. “Today, there are multiple opportunities for networking — conferences such as DES, career fairs, forums, and college events such as alumni meets, hackathons, etc provide opportunities to network.” Vipul considers it a two-way street. “Today someone might help you, and tomorrow you will be helping another.” The key is to connect with as many people as possible.

The effort, however, does not end at landing the first job. It is a continuous process, and one needs to keep working in order to grow in their career.

Growth as an Engineer

Vipul has worked with various companies, and each shift required planning and preparation. Most importantly, the willingness to explore. “As an engineer you always need to be open-minded. When you are studying for your course, say Masters, you wish to get into a specific role, but it does not necessarily happen, either because you don’t get that opportunity or you get selected for a different role. However, that’s okay because as you work you will slowly realise what interests you and you would accordingly shift to get there. When you apply your knowledge is when you realise what you like to do.”

Vipul also spoke about how shifting jobs in engineering roles as a junior is much easier. “You have basic programming knowledge, and coding, which will help you switch, and as a beginner, the expectations are low which means that you have huge opportunities to learn and grow, as opposed to the expectations from a mid- or senior-level engineer.”

Vipul strongly believes that having a good mentor and manager has always helped with his career growth. “You need to be open about the conversations you have with your manager about your career goals, and how you want to navigate through your career. Having regular feedback sessions with them will help with your growth. In case of bad feedback, you can have deeper conversations to know what the problem is and set a plan to improve yourself.”

The post How This Bengaluru Boy Cracked Netflix, Twitter and DoorDash Interviews appeared first on Analytics India Magazine.

Are self-driving trucks the key to supply chain issues?


Global supply chains remain in crisis after several national and international events. However, a massive truck driver shortage is a significant cause of delays and missed deliveries. Many companies are turning to automated trucks to solve this problem, but are they the key to supply chain issues?

Pros of Self-Driving Trucks for the Supply Chain

There are significant positives to introducing self-driving trucks to the supply chain.

Safety

One of the top benefits of automated trucks is safety. Though businesses are still evaluating and mitigating risks, they could be the key to preventing the thousands of accidents involving commercial trucks that happen each year.

Trucking companies must balance delivery needs with the vital role of protecting their drivers. Sleep is a problem for truck drivers, and the current shortage leads to increased pressure to meet delivery deadlines. To still get enough sleep to drive safely, drivers have to rush their routes, which can lead to poor decision-making and accidents.

If not, they drive while fatigued, which also increases accident risk. Manufacturers program self-driving trucks to stay in the right-hand lane and go the speed limit, reducing risky maneuvers. The trucks are predictable and other drivers can feel confident they will not pull out in front of them or travel too fast.

Time

Since human exhaustion is not a consideration with these trucks, they can be on the road for much longer. With a struggling supply chain and consumers relying on fast deliveries, these automated vehicles may be a top benefit. Without human needs, they can go directly from point A to point B and take advantage of lower traffic times, like the middle of the night when drivers would typically be too exhausted to continue their route safely.

Efficiency

The speed-controlled benefits of automated trucks also make them more efficient. Though drivers may work to stay at a consistent speed, it is easier for self-driving computers to control it.

Companies with efficient vehicles can reduce their impact on climate change. They can meet efficiency goals while having more trucks on the road simultaneously.

Cons of Self-Driving Trucks for the Supply Chain

While there are solid potential positives to introducing automated trucks, concerns surrounding their introduction and prolonged use remain.

Safety

There are many safety benefits to these trucks, but problems still need addressing before calling them sufficiently safe. Automated vehicles run on a computer system, which can have glitches or even fail. A scary incident in 2022 highlighted the potential problems with these trucks.

A system failure led to one traveling haphazardly across lanes and crashing into a barrier. Thankfully, the crash did not injure anyone nearby, but it shows there is still a long way to go in perfecting this technology. The accident is not an isolated incident, and it shows the hazards fleets of self-driving trucks would pose without further changes.

There are also navigation system concerns. While the trucks should stay in the right-hand lane, predicting every scenario the computers need to cover is near-impossible. Driving is unpredictable, and split-second decisions are sometimes necessary. Until these systems can handle that unpredictability, there may be safer, more convenient options than placing automated trucks on public roadways.

Price

Automated trucks are expensive, and replacing a still-functional fleet costs both time and money. Most of these trucks cost more than $250,000, while conventional models typically cost around $150,000. This huge price difference can make it harder for companies to afford an entire delivery fleet, negating the supply chain advantages of automation.

Jobs

There’s no question that introducing self-driving trucks will eliminate the need for drivers. While there is currently a significant shortage of drivers, automation would still leave a large number of dedicated workers unemployed.

Many employees fear what automated processes will do to their jobs, making drivers hesitant to stay in the industry. When a company does not have enough human employees, it is harder to course-correct when errors happen, slowing down the supply chain. Humans created the software that automates these vehicles, and a lack of staff can prevent the necessary personnel from jumping into action when things go wrong.

Conclusion

These trucks are game-changing for the businesses using them, creating smoother and faster deliveries and improving the supply chain. However, innovators must address particular aspects before moving forward with automated fleets.

12 VSCode Tips and Tricks for Python Development


Visual Studio Code (VSCode) is one of the most popular Integrated Development Environments (IDEs) for Python development. It is fast and comes with rich features that make the development experience fun and easy.

The VSCode Python extension is one of the prominent reasons I use the editor for all work-related tasks. It provides syntax autocomplete, linting, unit testing, Git integration, debugging, notebooks, editing tools, and the ability to automate most of your tasks. Instead of doing things manually, you press a keyboard shortcut or click a few buttons.
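As a small illustration of what the autocomplete and linting features work from, consider a type-hinted function; the function and names below are only an illustrative sketch, not part of the extension itself:

```python
def greet(name: str) -> str:
    """Return a greeting for `name`.

    With the Python extension installed, VSCode uses the type hints to
    drive autocomplete (str methods on `name`) and to let a linter flag
    mistakes such as calling greet(42).
    """
    return f"Hello, {name}!"

print(greet("VSCode"))  # prints "Hello, VSCode!"
```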

In this post, we will learn how to take VSCode to the next level and become more productive at building Python software and solutions.

Note: If you are new to VSCode and want to learn all of the basics, read the Setting Up VSCode For Python tutorial to understand key features.

1. Command line

You can launch VSCode from Terminal or Bash using CLI commands.

  1. Open VSCode in the current directory: code .
  2. Open VSCode in the current directory in the most recently used window: code -r .
  3. Create a new window: code -n
  4. Open file diff editor VSCode: code --diff <file1> <file2>

2. Command Palette

Access all available commands and shortcuts based on the current context. You can initiate Command Palette by using the keyboard shortcut: Ctrl+Shift+P. After that, you can type related keywords to access specific commands.

3. Keyboard shortcuts

What is better than a Command Palette? Keyboard shortcuts. You can modify keyboard shortcuts to your needs or learn about default keyboard shortcuts by reading the keyboard-shortcuts reference sheet.

Keyboard shortcuts will help us access the commands directly instead of scrolling through the command palette options.

4. Errors and warnings

Quickly access the errors and warnings by using the keyboard shortcut: Ctrl+Shift+M and cycle through them by clicking on the warning or pressing F8 or Shift+F8 keys.

5. Fully Customizable Development Environment

You can customize themes, icons, keyboard shortcuts, debugging settings, fonts, linting, and code snippets. VSCode is a fully customizable development environment that even lets you create your own extensions.

6. Extensions

VSCode extensions for Python can improve the development experience and make you more productive. And it is not all about productivity; it is also about visuals. The most popular Python extensions on the Visual Studio Marketplace provide interactive GUIs with stats and graphs.


Check out my list of 12 Essential VSCode Extensions for Data Science that will make VSCode a super app so that you can perform all of the data science tasks without leaving the application.

7. Jupyter Notebook

The most important extension that lets you perform data analysis and machine learning experiments is the Jupyter Notebook extension.


This extension is highly recommended for data scientists for performing data science experimentation and building production-ready code.
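The extension also recognizes `# %%` markers in plain .py files, turning them into runnable, notebook-style cells; the values below are illustrative:

```python
# %%
# In VSCode, a "# %%" comment starts a notebook-style cell; the Jupyter
# extension shows "Run Cell" above each marker, so a plain .py file can
# be executed cell by cell in the Interactive window.
import statistics

readings = [21.5, 22.0, 21.5, 23.0]  # illustrative sensor readings

# %%
# Cells share one interactive session, so `readings` is still in scope.
mean_reading = statistics.mean(readings)
print(f"Mean reading: {mean_reading:.2f}")  # prints "Mean reading: 22.00"
```

Unlike an .ipynb notebook, this stays an ordinary Python script, so it works with Git diffs, linting, and unit tests unchanged.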

8. Multi-cursor selection

Multi-cursor selection is a lifesaver when you have to make the same edit in multiple places.

  • Add multiple cursor points by using Alt+Click
  • To set the cursor above use Ctrl+Alt+Up or below Ctrl+Alt+Down
  • Add additional cursors to all occurrences of the current selection using Ctrl+Shift+L

9. Search and modify

I know this is a simple feature, but it is quite handy when you are editing similar variables, arguments, and parameters in various places in the file. You can search and replace them one by one or all at once.

To rename the symbol or argument, select the symbol and press the F2 key.

10. Built-in Git Integration

It is a built-in integration that allows you to perform all Git-related tasks by clicking a few buttons instead of typing Git commands in the CLI. You can visualize history, view diffs, and create new branches, all through a user-friendly GUI. It is even easier than the GitHub Desktop app.

11. Code Snippets

Code snippets are just like autocomplete, but you have more power over them. You can create custom snippets for repeating code patterns: instead of typing out a whole Python function, you type a word and the snippet fills in the rest.

To create a custom code snippet, select File > Preferences > Configure User Snippets and then select the language.
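For example, a user snippet is defined in plain JSON; the snippet name, `prefix`, and body below are illustrative entries you might add to the Python snippets file (`python.json`):

```json
{
  "Python main guard": {
    "prefix": "pymain",
    "body": [
      "def main():",
      "    $0",
      "",
      "if __name__ == \"__main__\":",
      "    main()"
    ],
    "description": "main() function with an entry-point guard"
  }
}
```

Typing `pymain` in a Python file and accepting the suggestion expands the whole pattern, with the cursor placed at the `$0` tab stop.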

12. GitHub Copilot

Everyone is talking about ChatGPT for code suggestions, but GitHub Copilot has been around for more than two years, and it keeps getting better at understanding user behavior and helping developers write fast, effective code. GitHub Copilot is based on OpenAI Codex, a descendant of GPT-3, and enhances the development experience by suggesting lines of code or entire functions.


Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master's degree in Technology Management and a bachelor's degree in Telecommunication Engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.

More On This Topic

  • Streamlit Tips, Tricks, and Hacks for Data Scientists
  • Quick Data Science Tips and Tricks to Learn SAS
  • Tips & Tricks of Deploying Deep Learning Webapp on Heroku Cloud
  • MathWorks Deep learning workflow: tips, tricks, and often forgotten steps
  • 4 Tricks to Effectively Use JSON in Python
  • Free eBook: 10 Practical Python Programming Tricks

TikTok is testing an in-app AI chatbot called ‘Tako’

Sarah Perez @sarahintampa / 8 hours

AI chatbots, like ChatGPT, are all the rage, so it’s no surprise to learn that TikTok is now testing its own AI chatbot, as well. Called “Tako,” the bot is in limited testing in select markets, where it will appear on the right-hand side of the TikTok interface, above the user’s profile and other buttons for likes, comments and bookmarks. When tapped, users can ask Tako various questions about the video using natural language queries or discover new content by asking for recommendations.

For instance, when watching a video of King Charles’ coronation, Tako might suggest that users ask “What is the significance of King Charles III’s coronation?”

Or, if users were looking for ideas of something to watch, they could ask Tako to suggest some videos on a particular topic — like funny pet videos. The bot would respond with a list of results that include the video’s name, author and subject, as well as links to suggested videos. From here, you could click on a video’s thumbnail to be directed to the content.

Image Credits: TikTok screenshot by Watchful.ai

The bot was discovered being publicly tested by app intelligence firm Watchful.ai, and TikTok confirmed the tests are now live.

“Being at the forefront of innovation is core to building the TikTok experience, and we’re always exploring new technologies that add value to our community,” a TikTok spokesman told TechCrunch. “In select markets, we’re testing new ways to power search and discovery on TikTok, and we look forward to learning from our community as we continue to create a safe place that entertains, inspires creativity and drives culture.”

However, though Watchful.ai says it found the AI chatbot in tests on iOS devices in the U.S., TikTok says the bot is not currently public in the U.S.; it is being tested in other global markets, including an early limited test in the Philippines.

We also understand the bot will not appear on minors’ accounts.

Behind the scenes, TikTok is leveraging an unknown third-party AI provider that TikTok has customized for its needs. That modification does not include the use of any in-house AI technologies from TikTok or parent company ByteDance.

Upon first launch, TikTok advises users in a pop-up message that Tako is still considered “experimental” and its feedback “may not be true or accurate” — a disclaimer that applies to all modern AI chatbots, including OpenAI’s ChatGPT and Google’s AI, among others. TikTok also stresses that the chatbot should not be relied on for medical, legal or financial advice. (We understand the wording in the image below may reflect an earlier version of the bot rather than the current tests.)

Image Credits: TikTok screenshot by Watchful.ai

The disclosure also notes that all Tako conversations will be reviewed for safety purposes and, vaguely, to “enhance your experience.” This is one of the complications that come with using modern AI chatbots, unfortunately. Because the technologies are so new, companies are opting to log customer interactions and review them to help their bots improve. But from a privacy standpoint, that means the AI conversations are not being deleted after chats end, which poses potential risks.

Some companies have worked around this consumer privacy concern by allowing users to delete their chats manually, as Snap has done with its My AI chatbot companion in the Snapchat app. TikTok is taking a similar approach with Tako, as it also allows users to delete their chats.

It’s unclear if the AI chatbot is logging data associated with the user’s name or other personal information, though. The long-term data retention policies or privacy aspects of the chatbot also couldn’t be determined at this time.

Image Credits: TikTok screenshot by Watchful.ai

The security risks of AI chatbots have led some companies to ban such bots at work, including Apple, which has gone so far as to restrict employees from using tools like OpenAI’s ChatGPT or Microsoft-owned GitHub’s Copilot over concerns about confidential data being leaked. Others who have recently enacted similar bans include banks like Bank of America, Citi, Deutsche Bank, Goldman Sachs, Wells Fargo and JPMorgan, as well as Walmart, Samsung and telecom giant Verizon.

Why consumers would even want an AI chatbot in TikTok is another matter.

While most companies are experimenting with AI in some way, shape or form, TikTok believes the chatbot could do more than just answer questions about a video — it could also become a different way for users to surface content in the app, beyond typing into a search box.

This could become a threat to Google if TikTok’s tests were successful and the chatbot publicly rolled out, given that Google has already noted how Gen Z are turning to TikTok and Instagram as the first place they go to search on certain subjects. Soon, Google will begin rolling out a conversational experience in search, but if TikTok had its own in-app AI chatbot, that could encourage younger users to bypass Google altogether.

Update, 5/25/23, 9 AM ET: At the time of publication, TikTok shared additional information about Tako on its Twitter account. We’ve updated with additional details, where relevant.

1/ We're in the early stages of exploring chatbot tools with a limited test of Tako with select users in the Philippines. Tako is an AI-powered tool to help with search and discovery on TikTok.

— TikTokComms (@TikTokComms) May 25, 2023

Generative AI is Having An Edison Moment

The dangers of not understanding innovation money are best illustrated by the battle between two inventors: Thomas Edison and Nikola Tesla. The former produced inventions largely through trial and error, yet knew how to capitalise on them.

Tesla’s ideas were arguably brilliant. The visionary was even described by Edison as someone whose “ideas are magnificent.” But he was simply unable to attract the right financial resources to commercialise his ideas.

Currently, generative AI models are having an Edison moment. The companies making the models accessible through products and services are thriving more than the research, which can be comparatively more impactful. Tech bros who don’t want to miss the opportunity and are busy implementing oversold technologies should remember some fundamental realities about tech bubbles.

A bubble’s shelf life depends upon the narratives about how the specific technology will affect societies and economies, as professors Brent Goldfarb and David Kirsch noted in their 2019 book, Bubbles and Crashes: The Boom and Bust of Technological Innovation. Unfortunately, the early narratives that emerge around new technologies commonly fail to meet expectations.

GenAI is not AI

Companies are running after text-to-anything, as we noticed during their latest annual conferences: Google I/O, Microsoft Build, and IBM Think. The companies have been in an aggressive tussle since the release of ChatGPT in late November last year. The developers who showed off their latest technologies have been working around the clock, under pressure to deliver, and the same was echoed by some of the speakers at these conferences.

Google chief Sundar Pichai kicked off Google I/O 2023 with “AI is having a very busy year” and then announced a list of Google products that will henceforth be integrated with its language model PaLM 2. Similarly, during Microsoft Build, company chief Satya Nadella chanted the ‘Copilot’ mala. Even though investors were impressed with AI being embedded into every facet of the Redmond giant’s offerings, the shares fell fractionally.

Earlier, during February’s Paris debacle where Google promoted its AI chatbot Bard, the company’s stock sank 7%, and Google employees described Pichai’s announcement as “rushed” and “botched.”

Meanwhile, other brilliant innovations are overlooked or not given enough appreciation. Of the 100 announcements at I/O, around 40 were about generative AI, and there was hardly any new update related to AlphaFold, which is revolutionising the life sciences landscape.

Other jaw-dropping feats that have been overshadowed by the distraction of language models include DragGAN, Meta’s CICERO, DeepMind’s nuclear-fusion control algorithm, and the list goes on.

“Looking back, it’s amazing how easy things were for researchers when I was a young man. In comparison to just how competitive the field has become,” the 80-year-old American computer scientist Jeffrey Ullman told AIM, saying that academics and researchers who could be excellent teachers, or focus on other groundbreaking innovation, are forced into doing second and third grade research because that is how they get recognised or promoted.

The Shiny Object Syndrome

The AI wave highlights a pervasive issue known as shiny object syndrome, common in the tech industry, where researchers get easily distracted by novel tools and trends. In an edition of ‘Letters by Andrew Ng,’ the founder of DeepLearning.AI stated that AI has an Instagram problem. “I’m here to say: Judge your projects according to your standard, and don’t let the shiny objects make you doubt the worth of your work!” he declared.

He further addressed people who doubt their work’s worth and judge it against the perfect standards set by the media. He wrote, ‘Just as pictures of people’s perfect lives in the media aren’t representative, pictures of AI developers’ postings of their amazing projects also aren’t representative.’

On a similar note, Andrew Ng’s mentor Michael Irwin Jordan told AIM that most of these are buzzwords. “Just because you’re using computer vision or ChatGPT as some part of that doesn’t necessarily change anything,” he said.

Apart from affecting research, the false promise of the technology also takes a toll on the global economy. Bank of America strategist Michael Hartnett recently noted that tech and AI are forming a risky bubble. Experts are also predicting that Silicon Valley’s darling GPT bubble could cause a meltdown similar to the dotcom bubble, which led to a stock market crash in early 2000.

The bottom line is that the internet’s favourite language models are definitely entertaining for the public eye and the industry’s cash cow, but the technology is being oversold, which is anticipated to disrupt the purpose of research. Amid the chaos in tech, one should recall when Tesla pinpointed impatience as researchers’ problem. Highlighting their eagerness for their ideas to work, he said, “They want to try their first idea right off; and the result is they use up lots of money and lots of good material, only to find eventually that they are working in the wrong direction. We all make mistakes, and it is better to make them before we begin.”

The post Generative AI is Having An Edison Moment appeared first on Analytics India Magazine.