Shopify Sidekick is like ChatGPT, but for ecommerce merchants

By Kyle Wiggers

Shopify, like countless other Big Tech brands, is leaning into generative AI.

At its bi-annual Editions conference, the ecommerce company announced an expanded set of features under Shopify Magic, Shopify’s catchall brand for generative AI.

As of today, Shopify Magic can provide answers to merchants’ customers tailored to their conversation histories and store policies, and generate blog posts, product descriptions and marketing email content. And, via a new chatbot-like AI tool called Sidekick, Shopify can now understand and interpret questions or prompts related to business decision-making.

“We believe it’s our responsibility to keep the businesses we power on the cutting edge of technology,” Miqdad Jaffer, Shopify’s head of product for AI, told TechCrunch in an email interview. “There’s still so much untapped potential for AI and entrepreneurs, and we’re deeply committed to making the power of AI accessible to businesses of all sizes.”

Jaffer explained that the new Shopify Magic capabilities are powered by a combination of proprietary Shopify data, including merchant business data, paired with available large language models (LLMs) like OpenAI’s ChatGPT — architected in a way that data and conversations aren’t shared with third parties.

Leveraging that data and those models, Shopify Magic can now generate blog posts for events like holidays, business milestones and campaign ideas, offering merchants the ability to customize the tone of voice and translate content into different languages. In addition — as alluded to — Magic can now create content for customer emails from prompts just a few words in length, automatically writing weekly newsletters, announcements and more.

Given the tendency of LLMs to generate untrue, biased or even toxic content, Shopify merchants are given the chance to review copy from Magic before it goes live, Jaffer stressed. And as an added safety measure, Shopify’s AI features are never allowed to write or make changes to any Shopify production systems, he added.

Sidekick

Perhaps the highlight of Shopify’s announcements today is Sidekick, a conversational AI assistant that’s trained to “know and understand all of Shopify,” as Jaffer puts it.

Sidekick can understand questions along the lines of “How do I set up a discount for a holiday sale?” and “Help me segment my customers so I can better engage them in my marketing,” while performing tasks like summarizing information across sales documents and performing basic product research. Moreover, Sidekick can be instructed to accomplish specific to-do items such as creating reports showing a merchant’s best-selling products or walking a merchant through an email campaign orchestration tutorial.

Shopify Sidekick

Image Credits: Shopify

Sidekick also has the power to modify Shopify merchant shop designs. Among other things, it can add product collections to a home page, or suggest themes and copy for a hero banner.

In a demo video tweeted by Shopify co-founder and CEO Tobi Lütke a few weeks ago, Sidekick is shown answering a series of questions from a snowsports supply merchant. Asked why sales dipped from March to July, Sidekick responds that it was probably due to minimal snow, serving a chart showing sales volumes month-by-month. Then, asked by the merchant to “put everything on sale,” Sidekick suggests a 10% automatic discount on all products in the merchant’s store.

Shopify Sidekick

Image Credits: Shopify

From all appearances, Sidekick is basically like an ecommerce-tuned version of ChatGPT — and Jaffer said that’s the point.

“We believe that there isn’t any corner of the internet that will benefit more from AI than the pursuit of people building and growing their own businesses,” Jaffer said. “Features [like Sidekick] are constantly evolving to tailor to and act on the needs of our merchants.”

OpenAI, Google, Microsoft, and Anthropic Launch Frontier Model Forum

Big Tech companies have united to form the Frontier Model Forum. Led by industry heavyweights OpenAI, Google, Microsoft and Anthropic, the forum aims to promote the safe and responsible development of frontier AI systems. It seeks to advance AI safety research, identify best practices and standards, and enable information sharing between policymakers and industry. The collaboration follows the recent announcement that big tech companies would pursue responsible AI in line with White House guidance.

Open Collaboration

The Frontier Model Forum welcomes participation from other organisations developing frontier AI models. The Forum will also support efforts to meet society’s challenges, including climate change, early cancer detection and prevention, and fighting cyber threats.

The Frontier Model Forum will establish an Advisory Board, representing a diversity of backgrounds and perspectives, to help guide its strategy. The forum will also support existing government initiatives such as the G7 Hiroshima process, the OECD’s work on AI risks, and the US-EU Trade and Technology Council.

The Forum defines frontier models as large-scale machine learning models that exceed the capabilities of the most advanced existing models. Organisations applying for membership must meet a set of criteria: develop frontier models as defined by the Forum, demonstrate a commitment to building them safely and responsibly, and contribute to the Forum’s joint initiatives.

Anna Makanju, VP of Global Affairs at OpenAI, said it is vital that AI companies working on powerful models “align on common ground and advance adaptable safety practices to ensure powerful AI tools have the broadest benefit possible.”

However, big tech players such as Meta, Apple and Tesla are not in the picture.

What about Meta? Will Apple Join?

When it comes to large language models, the companies that have come together are mostly closed-source. The exception is Microsoft, which, through its recent partnership with Meta, launched the open-source Llama 2. Meta itself, however, is not associated with the forum.

Apple is known for using open-source tools without contributing back, and for building a closed ecosystem. However, with Apple reportedly planning its own chatbot, nicknamed AppleGPT, its efforts to advance AI are evident. Will this qualify Apple to join the forum? It remains to be seen whether these giants will join the club.

The post OpenAI, Google, Microsoft, and Anthropic Launches Frontier Model Forum appeared first on Analytics India Magazine.

Unlock the Power of AI – A Special Release by KDnuggets and Machine Learning Mastery

Hello,

I hope this email finds you well, coding away and innovating in the dynamic world of Machine Learning.

Today, I am excited to announce a collaboration between Machine Learning Mastery and KDnuggets. Together, we've created something unique to enrich your Machine Learning journey.

I present to you our brand new ebook, "Maximizing Productivity with ChatGPT". While we've been known for our technical, code-heavy books that have guided many through the intricate pathways of Machine Learning, this time we're offering something different but equally impactful.

This ebook shifts the focus from pure coding and technical aspects, to understanding, interacting, and leveraging one of the most advanced AI tools in the market — ChatGPT. This is an evolution from our prior books, aimed at broadening your perspective and deepening your understanding of AI applications.

You'll discover:

  • 🔍 The foundational principles behind large language models.
  • 🖥️ Strategies for testing and interacting with LLMs through tools like GPT4All.
  • 💡 Techniques for effective brainstorming and interaction with ChatGPT.
  • 📚 Guidance on using ChatGPT as your expert helper, translator, summarizer, and even diagram creator.
  • And much more!

In celebration of this launch, we're offering an exclusive 20% early bird discount with the code "20offearlybird" at checkout. But don't delay — this offer ends soon!

👉 Maximizing Productivity with ChatGPT

This ebook is a testament to the fact that not all roads to mastering Machine Learning and AI are paved with code alone. Harnessing the power of AI also involves understanding its applications and learning how to effectively interact with it. "Maximizing Productivity with ChatGPT" offers you exactly that — an avenue to explore and master the usage of AI beyond the traditional coding confines.

If you have any questions, please don't hesitate to hit reply and send me an email directly. Here's to harnessing the power of AI together.

— Jason, Machine Learning Mastery Founder


Why Musk & Altman Want to Revive Crypto

Is crypto dead? The answer depends on who you ask. A crypto enthusiast will give you a hundred different reasons why crypto is not dead, and they won’t be entirely wrong. Technology leaders like Elon Musk, Tesla’s CEO, and Sam Altman, OpenAI’s CEO, share a profound fascination with cryptocurrencies and their potential role in shaping the future.

While Musk has been advocating for cryptocurrencies for quite some time, Sam Altman’s Worldcoin went live recently. The OpenAI founder believes that in the age of AI, Worldcoin’s solution could prove to be pivotal. He told Reuters that it can help address how the economy will be reshaped by generative AI.

What’s interesting is that both Musk and Altman are big believers in AI and its potential. While Altman has changed the whole AI landscape with ChatGPT, Musk helped found OpenAI in 2015 and has more recently formed a new AI company, xAI, to understand the true nature of the universe. Although they allegedly had a fallout over their divergent views on AI, both are now actively involved in revitalising the world of crypto.

Drawn towards futuristic ideas

As visionary entrepreneurs, both Musk and Altman are drawn to futuristic ideas and innovations, and cryptocurrencies represent an evolving digital landscape with potential futuristic applications.

Musk’s endeavours with autonomous vehicles, putting chips in the human brain and colonising Mars are examples of that. Similarly, Altman’s Worldcoin appears to be something from a dystopian world, where an Orb scans your iris to determine whether you are a human or an AI. According to Altman himself, the concept of Worldcoin is something that would be applicable ‘very far in the future.’

A global database of human IDs issued after an eye scan, paid for with useless Worldcoin token, connected with all financial transaction data of each individual managed by a centralized non-profit collecting sensitive data for KYC/AML.
WHAT COULD GO WRONG? https://t.co/8K6R17vBb0

— Anita ⚡🏳️‍🌈 Bitcoin for Fairness (@AnitaPosch) July 24, 2023

Altman has indeed acknowledged that Worldcoin comes with an ‘ick factor’; however, he envisages that once people understand the technology and the need for it, they will be more open to it. Nonetheless, despite the ethical and privacy concerns raised over their futuristic projects, both men are moving ahead at full steam.

Hence, it should not be surprising that a world where Bitcoin or another cryptocurrency becomes legal tender, challenging the dollars and rubles of the world, fascinates them both. Cryptocurrencies have endured regulatory challenges, government scrutiny, scandals and market volatility, and have persisted for over a decade.

Moreover, Musk has recently criticised traditional financial systems while endorsing cryptocurrencies. And while it is not entirely clear how Altman perceives cryptocurrencies, Worldcoin suggests he is a firm believer in the concept.

Altman is preparing for a post-AGI world

The premise of Worldcoin is fascinating. The goal is simple: a global financial and identity network based on proof of personhood in the AI era, as Altman puts it. The minds behind Worldcoin, including Alex Blania and Max Novendstern, believe that achieving artificial general intelligence (AGI), where AI surpasses human intelligence, will trigger a surge in productivity and subsequent wealth generation.

This is where it gets interesting: the creators believe the wealth generated by AI should be distributed equally among all citizens of the world in the form of cryptocurrency (a universal basic income), ‘preferably’ Worldcoin. Fascinatingly, both Musk and Altman have predicted that AGI will arrive before 2030.

However, not everyone thinks alike. Akshay Bajaj, co-founder and chief executive at DeFiVerse, believes that Worldcoin currently appears a little overhyped. “I guess they will have to prove themselves as not just a fancy idea. Taking on this much data globally will prove to be a big challenge. Governments across multiple jurisdictions will definitely make some objections. Even though currently the coin is mostly hype, the backers are really big names,” he told AIM.

So far, Worldcoin has raised around USD 250 million with backing from prominent venture capitalists and groups such as Andreessen Horowitz, Khosla Ventures, Reid Hoffman and the infamous Sam Bankman-Fried. Nonetheless, besides getting rid of the ‘ick factor’, the startup has to overcome many regulatory and governmental challenges. Also, what are the chances that in a post-AGI world, the current financial systems won’t exist? Moreover, governments across the globe are already working on their CBDCs today.

Musk’s approach is much more realistic

On the other hand, Musk’s plan for Twitter (sorry, ‘X’) appears more realistic than Altman’s. “In the months to come, we will add (to X) comprehensive communications and the ability to conduct your entire financial world,” he said.

When Musk took over the microblogging platform, there was speculation that he would introduce crypto to it. In April 2023, it was reported that Twitter would allow users to trade stocks and cryptocurrencies through a partnership with social investing company eToro.

His vision is to turn the microblogging platform into an AI-powered global marketplace for ideas, goods, services, and opportunities. As reported by the Financial Times, Musk plans to start with support for fiat currencies and to integrate cryptocurrencies later, if the need arises. This could mean that Musk’s vision of an AI-driven world includes a potential use case for cryptocurrencies, like Altman’s, or that fiat currencies and crypto will co-exist, which is the more likely scenario.

Indeed, Musk has said that the current banking systems are ‘not real-time’ and inefficient, and not on par with today’s digital, interconnected world. Musk’s answer to this problem is crypto. Hence, a scenario where X does not bring in crypto is hard to imagine.

“Elon Musk’s crypto policies have always been very positive. I hope he talks about Ethereum or Bitcoin as well,” Bajaj said. Previously, Musk has said that his space venture SpaceX will accept Dogecoin, a cryptocurrency he has backed for quite some time, as payment. Similarly, Tesla bought USD 1.5 billion worth of bitcoin and said the car manufacturer would accept payments in crypto.

There are other technocrats

While Musk might appear to be the ‘messiah’ of cryptocurrencies owing to his influence, he and Altman are not the only technocrats endorsing them. Jack Dorsey, the founder of Twitter, is actively running his own crypto projects. Earlier this year, Dorsey revealed that his firm Block (previously Square) is developing Bitkey, a hardware wallet users can transfer their crypto to, empowering them to take control of their bitcoin in a secure, user-friendly way.

However, unlike Altman, Dorsey is a firm believer in decentralisation. After Twitter, he founded Bluesky, a decentralised social media platform, and hence does not seem fascinated by the idea of Worldcoin.

"Worldcoin is an attempt at global scale alignment…"
cute https://t.co/VbsX3KcKds

— jack (@jack) July 24, 2023

Interestingly, Worldcoin is not the first platform advocating a universal basic income. eToro founder Yoni Assia said he was flattered by Worldcoin’s approach, which emulates his own universal basic income platform, GoodDollar.

Musk, too, like Dorsey, favours a decentralised approach, and has called Bluesky an interesting idea. Previously, Musk also sparked controversy by saying ‘bitcoin is not decentralised’.

The post Why Musk & Altman Want to Revive Crypto appeared first on Analytics India Magazine.

5 Mistakes I Made While Switching to Data Science Career

Image by Author

I transitioned from technology management to data science because I was drawn to the analytical side of my previous career. Incorporating IoT (Internet of Things) into business and using various analytical techniques to collect and analyze data was highly valued there. To pursue data science, I started learning programming, statistics, and the field's terminology.

In this blog, I will share five mistakes that have cost me both time and energy. Additionally, I will provide proposed solutions to avoid making these mistakes in the future.

1. Taking Random Courses

I was trying to learn data science by watching random free courses on YouTube or Coursera, but it only left me feeling more confused. Even though I thought I understood what I was learning, I couldn't solve problems on my own.

After three months of this, I realized I needed a more structured approach. That's why I decided to take the career track from DataCamp. This program includes all the necessary courses to understand the basics of data science. The career track also has guided projects, interactive exercises, and assessment tests that have helped me build more confidence in my analytical abilities.

Solution: Consider enrolling in a paid career track that offers various interactive courses covering both basic and advanced concepts. Many reputable educational platforms provide this option.

2. I Didn't Take Math and Statistics Seriously Enough

You must understand the mathematical basis behind models to maintain your professional reputation. I have experienced embarrassment in job interviews, meetings, and while creating documentation due to this. I vividly recall an interview where an expert asked me about the gradient descent equation, and I was unable to provide an answer. It was then that I realized I needed to revisit and strengthen my understanding of statistical fundamentals.

Solution: It's advisable to take a statistics and probability course and understand how machine learning models work mathematically.
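The update rule that interview question was about is worth committing to memory: repeatedly step the parameters against the gradient of the loss, theta ← theta − lr · dJ/dtheta. Here is a minimal sketch on a toy one-dimensional loss (an illustrative example of my own, not from any specific interview or course):

```python
# Gradient descent on a 1-D loss J(theta) = (theta - 3)^2,
# whose gradient is dJ/dtheta = 2 * (theta - 3).
# Update rule: theta <- theta - lr * gradient.

def gradient_descent(lr=0.1, steps=100, theta=0.0):
    for _ in range(steps):
        grad = 2 * (theta - 3)   # analytic gradient of the loss
        theta -= lr * grad       # move against the gradient
    return theta

print(round(gradient_descent(), 4))  # converges toward the minimum at theta = 3
```

Being able to write down and explain a loop like this for whatever model you claim to know is exactly the kind of fluency interviewers probe for.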

3. Not Documenting my Work

Although I have worked on several projects and Kaggle competitions, I failed to document my progress and achievements. It took me a year to realize the importance of documenting both my project and journey, which can help me secure better job opportunities and build a stronger portfolio. In hindsight, I should have shared my journey on LinkedIn and Medium from the outset. Doing so would have allowed me to make new connections, expand my reach, enhance my professional portfolio, and facilitate collaboration.

Solution: To showcase your project, it's best to share the project elements and code on GitHub. You can also write a blog post about it on Medium and share it with the LinkedIn data science group. This will help you get more exposure.

4. I Applied for the Wrong Jobs

In the past, I applied for all data scientist, data analytics, or business intelligence jobs without researching what companies were seeking. I believed I could effortlessly transition into data science. However, I underestimated the extensive knowledge and skills required for the field. To succeed, it's crucial to remain humble, acknowledge your knowledge gaps, and commit to continuous learning.

To succeed in the industry, it's important to familiarize yourself with the standard practices and acquire relevant skills. If you lack experience, consider seeking internships or contributing to reputable open-source projects.

Solution: Once you've finished the essential course, focus on building a robust data science portfolio. Take the time to research job expectations and requirements, and continuously learn new tools and skills to enhance your resume. Avoid applying for jobs immediately and instead, strive to fully understand what potential employers are looking for in a candidate.

5. Participating in too many Competitions

After discovering some tricks about machine learning, I began participating in competitions on Kaggle. I became addicted, even joining contests without prior knowledge of the topic. I convinced myself that I was learning new techniques from others, but in reality, I was just wasting my time.

As an advocate for learning machine learning through competitions, I must warn newbies that winning is difficult. While I came close and often placed among the top 1%, it did not add any significant value to my career. Instead, I should have focused on real-world projects or sought experience through internships or jobs.

Solution: Don't fool yourself. Always keep your goals in mind. Instead of spreading yourself too thin by participating in too many competitions, consider focusing on a complex open-source project, writing on Medium, building your portfolio, and getting involved in community events.

Final Thoughts

I have made many mistakes, and they have taught me a lot about myself and where I stand. The only things that kept me going through hard times were dedication and clear goals. It might take me longer than others, but I wasn't going to quit.

If you feel discouraged and believe that the task at hand is too difficult, I suggest that you explore other options that work for you. Don't let anyone discourage you. Keep trying, and you will eventually find a system that suits you and helps you achieve your dream job. Additionally, it's crucial to work on building your data science portfolio from the beginning by utilizing platforms such as Kaggle, GitHub, DagsHub, and Deepnote.
Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master's degree in Technology Management and a bachelor's degree in Telecommunication Engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.


Microsoft Bets Big on Azure Despite its Decline in Revenue Growth 


Microsoft recently announced its financial results for the fourth quarter of FY23, in which Azure revenue growth slipped 1 percentage point from the previous quarter, even as the company remained optimistic about its future investments in Azure cloud.

Microsoft shares fell 4% in after-hours trading on Tuesday as the tech giant reported Azure and other cloud services revenue growth of 26% year-over-year for the fourth quarter, down 1 percentage point from the prior quarter and well below the 40% growth of the year-ago quarter.

Microsoft’s revenue beat Wall Street expectations of $55.47 billion, per Refinitiv. The company reported revenue of $56.2 billion, up 8% from the corresponding quarter last year. Meanwhile, net income for the quarter rose 20% to $20.1 billion, or $2.69 per share; analysts were expecting $2.55 a share.

The revenue from Office commercial products and cloud services witnessed a 12 percent year-over-year increase, driven by a notable 15 percent growth in Office 365 Commercial revenue.

LinkedIn, one of Microsoft’s most used professional apps, saw a major decline in revenue growth: only 5%, compared to 26% in the corresponding quarter last year.

The earnings report for this quarter is crucial as it comes against the backdrop of Microsoft’s announcements on generative AI and its cloud services for enterprises.

“Organizations are asking not only how – but how fast – they can apply this next generation of AI to address the biggest opportunities and challenges they face – safely and responsibly,” said Satya Nadella, the chairman and chief executive officer of Microsoft, in a release announcing the earnings.

“We remain focused on leading the new AI platform shift, helping customers use the Microsoft Cloud to get the most value out of their digital spend and driving operating leverage,” he added.

Microsoft usually does not break out precise quarterly revenue for Azure, its most crucial tool to leverage generative AI endeavors. However, in the earnings call, Microsoft CEO Satya Nadella revealed that Microsoft Cloud surpassed $110 billion in annual revenue, up 27% in constant currency, with Azure all-up accounting for more than 50% of the total for the first time.

What’s next?

Speaking on the roadmap for the next quarter, CFO Amy Hood said the company will accelerate investment in its cloud infrastructure. Microsoft expects capital expenditures to increase sequentially each quarter through the year as it scales to meet demand signals.

Microsoft expects Azure revenue growth to be 25 to 26% for next quarter (Q1) in constant currency, including roughly 2 points from all Azure AI services.

“Revenue will continue to be driven by Azure which, as a reminder, can have quarterly variability primarily from our per-user business and from in-period revenue recognition depending on the mix of contracts,” Hood added.

Microsoft has also recently diversified its AI endeavors across the industry. Notably, the company forged a partnership with Meta to release Llama 2, the social network’s open-source large language model, on Azure. The collaboration offers Llama 2 for free, enabling commercial or developer use of the model.

The post Microsoft Bets Big on Azure Despite its Decline in Revenue Growth appeared first on Analytics India Magazine.

NVIDIA DGX Cloud AI Supercomputing Brings AI Training as-a-Service

NVIDIA DGX Cloud for AI 2023
Image: NVIDIA

NVIDIA’s DGX Cloud infrastructure, which lets organizations lease space on supercomputing hardware suitable for training generative AI models, is now generally available. First announced in March, the $36,999 per instance per month service is in competition with NVIDIA’s own $200,000 DGX server. It runs on Oracle Cloud infrastructure and on NVIDIA hardware located in the US and the United Kingdom.


What does NVIDIA DGX Cloud do?

DGX Cloud is a remote-access version of NVIDIA’s hardware, including the thousands of NVIDIA GPUs online on Oracle Cloud Infrastructure.

The DGX AI system is the hardware that ChatGPT trained on in the first place, so NVIDIA has the right pedigree for organizations that want to spin up their own generative AI models. When training ChatGPT, Microsoft linked together tens of thousands of NVIDIA’s A100 graphics chips to get the power it needed; now, NVIDIA wants to make the process much easier — essentially, providing AI training as a service.

Pharmaceutical companies, manufacturers and finance institutions using natural language processing and AI chatbots are among DGX Cloud’s existing customers, NVIDIA said.

Organizations interested in DGX Cloud can apply to sign up.


What makes the NVIDIA DGX Cloud for AI platform work?

Key to the success of the DGX Cloud for AI platform is a high-performance, low-latency fabric that allows workloads to scale across clusters of interconnected systems, enabling multiple instances to perform as if they were all part of one GPU.

The subscription price of $36,999 per instance per month buys an organization a node of eight NVIDIA 80GB Tensor Core GPUs, for 640GB of GPU memory in total, all accessible from a web browser. Customers can manage and monitor training workloads through the NVIDIA Base Command Platform software dashboard.
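For a rough sense of what that subscription buys, the quoted figures work out as follows. The 730-hour month is my own approximation; NVIDIA does not publish a per-GPU-hour rate for DGX Cloud, so treat this only as a back-of-envelope comparison point:

```python
# Back-of-envelope cost per GPU-hour for a DGX Cloud instance,
# using the article's figures: $36,999/month for 8 GPUs (640 GB total).
MONTHLY_PRICE_USD = 36_999
GPUS_PER_INSTANCE = 8
HOURS_PER_MONTH = 730          # ~365 * 24 / 12, an approximation

per_gpu_hour = MONTHLY_PRICE_USD / (GPUS_PER_INSTANCE * HOURS_PER_MONTH)
memory_per_gpu_gb = 640 / GPUS_PER_INSTANCE
print(f"${per_gpu_hour:.2f} per GPU-hour, {memory_per_gpu_gb:.0f} GB per GPU")
```

That comes out to roughly six dollars per GPU-hour at full utilization, which is the kind of arithmetic a team would run when weighing the service against buying a $200,000 DGX server outright.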

“The DGX Cloud user interface (NVIDIA Base Command Platform) lets enterprises rapidly execute and manage model development without having to worry about the underlying infrastructure,” Tony Paikeday, senior director, DGX Platforms at NVIDIA, noted in an email to TechRepublic.

From there, organizations can use NVIDIA AI Enterprise, the software portion of the platform. It provides a library of over 100 end-to-end AI frameworks and pre-trained models, making the development and deployment of production AI relatively straightforward.

Paikeday pointed out that customers already using DGX Cloud have typically chosen it because traditional computing doesn’t provide as many dedicated resources.

Customers want “computational scale and network fabric interconnect that lets them parallelize these very large workloads over many co-resident compute instances operating as a single massive supercomputer,” he said.

How access to AI computing is changing

As generative AI becomes more common, organizations are responding to the demand for changes in the way AI is used, from a publicly trained powerhouse like GPT-4 to private instances in which organizations can use their own data and develop their own proprietary use cases. Access to the heavy-duty computing power needed will change accordingly.

“The availability of NVIDIA DGX Cloud provides a new pool of AI supercomputing resources, with nearly instantaneous access,” said Pat Moorhead, chief analyst at Moor Insights & Strategy, in a press release from NVIDIA.

“Generative AI has made the rapid adoption of AI a business imperative for leading companies in every industry, driving many enterprises to seek more accelerated computing infrastructure,” he said.

“We are at the iPhone moment of AI. Startups are racing to build disruptive products and business models, and incumbents are looking to respond,” said Jensen Huang, founder and CEO of NVIDIA, at the time of the original announcement in March. “DGX Cloud gives customers instant access to NVIDIA AI supercomputing in global-scale clouds.”


Top 8 Highlights from India’s Google ML Community in 2023

Globally, more than 35 Google developer communities hosted ML campaigns organized by the ML Developer Programs team during the first half of 2023, showcasing their work and interests. From among these, we've picked out our favourite contributions from the Indian community.

Here are the top 8 highlights from Indian Machine Learning Google Developer Experts!

Image Segmentation using Composable Fully-Convolutional Networks

In a demo, Suvaditya Mukherjee provided a detailed account of implementing a fully-convolutional network with a VGG-16 backbone and its application to image segmentation.

Furthermore, in the KerasCV for the Young and Restless session, he delved into fundamental computer vision components, showing the significance of KerasCV and highlighting its integration with TFX and the wider Keras ecosystem.

[ML Story] My Keras Chronicles

Aritra Roy Gosthipaty shared his journey into the world of deep learning, specifically with Keras. Within his narrative, he offered insights on how aspiring individuals can become part of the open-source community. Alongside him, Suvaditya presented their Keras implementation of Temporal Latent Bottleneck Networks, as originally proposed in the paper.

TensorFlow and Keras Implementation of the CVPR 2023 paper

Usha Rengaraju showcased a research paper's practical application, walking through the implementation of BiFormer, a novel Vision Transformer enriched with Bi-Level Routing Attention.

Skeleton Based Action Recognition: A failed attempt

Ayush Thakur provided insights into his experience participating in the Kaggle competition "Google – Isolated Sign Language Recognition". He shared his repository, training logs, and the various approaches he employed during the competition. Furthermore, his article "Keras Dense Layer: How to Use It Correctly" delves into the dense layer in Keras and its practical applications.

Add Machine Learning to your Android App

At Tech Talks for Educators, Pankaj Rai conducted a session attended by 700+ individuals about how to bring ML capabilities into Android applications, such as object detection and gesture recognition. During the session, he explained ML Kit, MediaPipe, and TF Lite tools, demonstrating their capabilities and instructing attendees on how to effectively use these resources.

Google’s Bard Can Write Code

In a demonstration, Bhavesh Bhatt showcased the coding capabilities of Bard, including how to create a 2048 game with it and how to add basic features to enhance the game's functionality. He also shared a playlist of videos on LangChain and introduced Google Cloud's new course on Generative AI.

Open and Collaborative MLOps

In support of the open-source community, Sayak Paul spoke about why openness and collaboration are two important aspects of MLOps. He gave an overview of the Hugging Face Hub, showcasing how well it integrates with TFX in MLOps workflows.

Learning JAX in 2023: Part 3 — A Step-by-Step Guide to Training Your First Machine Learning Model with JAX

Aritra Roy Gosthipaty and Ritwik Raha showed how to train linear and nonlinear regression models with JAX, and how to use PyTrees to train a multilayer perceptron model.

The post Top 8 Highlights from India’s Google ML Community in 2023 appeared first on Analytics India Magazine.

Clustering Unleashed: Understanding K-Means Clustering


When analyzing data, our goal is to find hidden patterns and extract meaningful insights. This brings us to a different category of machine learning: unsupervised learning, where one of the most powerful algorithms for clustering tasks is K-Means.

K-Means has become a useful algorithm in machine learning and data mining applications. In this article, we will dive deep into the workings of K-Means, its principles, its applications, and its implementation in Python. So, let's start the journey to uncover hidden patterns and harness the potential of the K-Means clustering algorithm.

What is the K-Means Algorithm?

The K-Means algorithm is used to solve clustering problems and belongs to the unsupervised learning class. With the help of this algorithm, we can group observations into K clusters.

Fig. 1 K-Means Algorithm Working | Image from Towards Data Science

This algorithm internally uses vector quantization, assigning each observation in the dataset to the cluster whose centroid (the cluster's prototype) is nearest. It is commonly used in data mining and machine learning to partition data into K clusters based on a similarity metric. The algorithm minimizes the sum of squared distances between the observations and their corresponding centroids, which eventually results in distinct, homogeneous clusters.
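In symbols, the objective K-Means minimizes is the within-cluster sum of squares (WCSS), where $C_k$ is the set of observations assigned to cluster $k$ and $\mu_k$ is that cluster's centroid:

```latex
J \;=\; \sum_{k=1}^{K} \; \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2
```

Each iteration of the algorithm reduces $J$ by alternately reassigning points to their nearest centroid and recomputing each centroid as the mean of its assigned points.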

Applications of K-means Clustering

The K-Means algorithm is commonly used in industry for solving clustering-related problems. Here are some of its standard applications.

  1. Customer Segmentation: K-Means clustering can segment customers based on their interests. It can be applied in banking, telecom, e-commerce, sports, advertising, sales, etc.
  2. Document Clustering: In this technique, we group similar documents from a set of documents, so that similar documents end up in the same cluster.
  3. Recommendation Engines: K-Means clustering can be used to build recommendation systems. For example, to recommend songs to a friend, you can look at the songs that person likes, use clustering to find similar songs, and recommend the most similar ones.

There are many more applications that I'm sure you have already thought of, which you can share in the comments section below this article.

Implementing K-Means Clustering using Python

In this section, we will implement the K-Means algorithm in Python on a dataset of the kind commonly used in data science projects.

1. Import necessary Libraries and Dependencies

First, let's import the Python libraries we will use to implement the K-Means algorithm, including NumPy, Pandas, Matplotlib, and Seaborn.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb

2. Load and Analyze the Dataset

In this step, we load the student dataset into a Pandas dataframe. To download the dataset, you can refer to the link here.

The complete pipeline of the problem is shown below:

Fig. 2 Project Pipeline | Image by Author

df = pd.read_csv('student_clustering.csv')
print("The shape of data is", df.shape)
df.head()

3. Scatter Plot of the Dataset

The next step is to visualize the data: we use Matplotlib to draw a scatter plot of the dataset so we can see how the observations are spread out before the clustering algorithm groups them into different clusters.

# Scatter plot of the dataset
import matplotlib.pyplot as plt
plt.scatter(df['cgpa'], df['iq'])

Output:

Fig. 3 Scatter Plot | Image by Author

4. Import the K-Means from the Cluster Class of Scikit-learn

Now, since we have to implement K-Means clustering, we import the KMeans class from scikit-learn's cluster module.

from sklearn.cluster import KMeans

5. Finding the Optimal Value of K using the Elbow Method

In this step, we will find the optimal value of K, one of the hyperparameters of the algorithm. The K value signifies how many clusters we must create for our dataset. Finding this value intuitively is not possible, so to find the optimal value, we plot WCSS (within-cluster sum of squares) against different K values and choose the K at the "elbow" — the point where the curve bends sharply and further increases in K yield only small reductions in WCSS.

# create an empty list to store the WCSS values
wcss = []
for i in range(1,11):
    # create an object of the KMeans class
    km = KMeans(n_clusters=i)
    # fit the algorithm on the dataframe
    km.fit_predict(df)
    # append the inertia (WCSS) value to the list
    wcss.append(km.inertia_)

Now, let’s plot the elbow plot to find the optimal value of K.

# Plot of WCSS vs. K to check the optimal value of K
plt.plot(range(1,11), wcss)

Output:

Fig. 4 Elbow Plot | Image by Author

From the elbow plot above, we can see that at K=4 the curve bends sharply; beyond this point, WCSS decreases only slowly. So using 4 as the optimal value of K should give good clustering performance.
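Reading the elbow off a plot is subjective. One simple heuristic — our own assumption here, not part of the original walkthrough — is to pick the K with the largest second difference of the WCSS curve, since that is where the curve bends most sharply. A sketch with illustrative WCSS values (not the actual values from the student dataset):

```python
import numpy as np

# Illustrative WCSS values for K = 1..10 (hypothetical numbers)
wcss = [1000, 700, 450, 150, 130, 115, 103, 94, 87, 82]

# The elbow is where the curve bends most sharply; approximate the
# curvature with the second difference of consecutive WCSS values.
second_diff = np.diff(wcss, n=2)

# second_diff[j] corresponds to K = j + 2 (two diffs shift the index,
# and K starts at 1)
k_optimal = int(np.argmax(second_diff)) + 2
print(k_optimal)  # → 4
```

This heuristic is crude — it can be fooled by noisy WCSS curves — so treat it as a sanity check on the visual elbow, not a replacement for it.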

6. Fit the K-Means Algorithm with the Optimal value of K

We are done with finding the optimal value of K. Now, let's do the modeling: we create an X array that stores the complete dataset with all the features. There is no need to separate target and feature vectors here, as this is an unsupervised problem. After that, we create an object of the KMeans class with the selected K value and fit it on the dataset. Finally, we print y_means, which holds the cluster label assigned to each observation.

X = df.iloc[:,:].values  # complete data is used for model building
km = KMeans(n_clusters=4)
y_means = km.fit_predict(X)
y_means

7. Check the Cluster Assignment of each Category

Let's check which points in the dataset belong to which cluster. For example, the following selects the second feature of every point assigned to cluster 3.

X[y_means == 3,1]

So far we have used the default K-Means++ strategy for centroid initialization. Now, let's initialize random centroids instead of K-Means++ and compare the results, following the same process.

km_new = KMeans(n_clusters=4, init='random')
y_means_new = km_new.fit_predict(X)
y_means_new

Check how many values match.

sum(y_means == y_means_new)
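A caveat worth noting here: cluster labels are arbitrary, so two runs can discover the same grouping while numbering the clusters differently, which makes the raw match count misleading. A minimal sketch — using hypothetical label vectors, not the student data above — with scikit-learn's permutation-invariant Adjusted Rand Index:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

# Hypothetical labels from two K-Means runs: run 2 found the same
# grouping but numbered the clusters differently (labels permuted).
y_means = np.array([0, 0, 1, 1, 2, 2, 3, 3])
y_means_new = np.array([2, 2, 3, 3, 0, 0, 1, 1])

# Naive element-wise comparison suggests nothing matches...
print(int(np.sum(y_means == y_means_new)))  # → 0

# ...but the Adjusted Rand Index ignores label numbering and shows
# the two partitions are identical.
print(adjusted_rand_score(y_means, y_means_new))  # → 1.0
```

For this reason, metrics like the Adjusted Rand Index are the standard way to compare clusterings across runs or initialization strategies.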

8. Visualizing the Clusters

To visualize the clusters, we plot each one on the axes with a different color, so we can easily see the 4 clusters that were formed.

plt.scatter(X[y_means == 0,0], X[y_means == 0,1], color='blue')
plt.scatter(X[y_means == 1,0], X[y_means == 1,1], color='red')
plt.scatter(X[y_means == 2,0], X[y_means == 2,1], color='green')
plt.scatter(X[y_means == 3,0], X[y_means == 3,1], color='yellow')

Output:

Fig. 5 Visualization of Clusters Formed | Image by Author

9. K-Means on 3D-Data

Since the previous dataset has 2 columns, it was a 2-D problem. Now we will apply the same steps to a 3-D problem and check that the code generalizes to n-dimensional data.

# Create a synthetic dataset with sklearn
from sklearn.datasets import make_blobs
centroids = [(-5,-5,5), (5,5,-5), (3.5,-2.5,4), (-2.5,2.5,-4)]
cluster_std = [1, 1, 1, 1]
X, y = make_blobs(n_samples=200, cluster_std=cluster_std, centers=centroids, n_features=3, random_state=1)

# Scatter plot of the dataset
import plotly.express as px
fig = px.scatter_3d(x=X[:,0], y=X[:,1], z=X[:,2])
fig.show()

Output:

Fig. 6 Scatter Plot of 3-D Dataset | Image by Author

wcss = []
for i in range(1,21):
    km = KMeans(n_clusters=i)
    km.fit_predict(X)
    wcss.append(km.inertia_)
plt.plot(range(1,21), wcss)

Output:

Fig. 7 Elbow Plot | Image by Author

# Fit the K-Means algorithm with the optimal value of K
km = KMeans(n_clusters=4)
y_pred = km.fit_predict(X)

# Analyse the different clusters formed
df = pd.DataFrame()
df['col1'] = X[:,0]
df['col2'] = X[:,1]
df['col3'] = X[:,2]
df['label'] = y_pred
fig = px.scatter_3d(df, x='col1', y='col2', z='col3', color='label')
fig.show()

Output:

Fig. 8 Clusters Visualization | Image by Author

You can find the complete code here — Colab Notebook

Wrapping it Up

This completes our discussion of K-Means: its workings, its implementation, and its applications. In conclusion, K-Means is a widely used algorithm from the unsupervised learning class that provides a simple and intuitive approach to grouping the observations of a dataset. Its main strength is dividing observations into multiple sets based on the similarity metric chosen by the user implementing the algorithm.

However, depending on the centroids selected in the first step, the algorithm behaves differently and may converge to a local rather than global optimum. Therefore, selecting the number of clusters, preprocessing the data, handling outliers, and so on are crucial for good results. Limitations aside, K-Means remains a helpful technique for exploratory data analysis and pattern recognition in various fields.
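As a concrete illustration of the preprocessing point: features on very different scales let one feature dominate the Euclidean distances K-Means relies on. A minimal sketch — using synthetic data, not the student dataset — with scikit-learn's StandardScaler applied before clustering:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical data where feature scales differ wildly: column 0 in
# single digits, column 1 in the thousands.
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(5, 1, 100), rng.normal(3000, 500, 100)])

# Without scaling, column 1 would dominate every distance computation.
# StandardScaler gives each feature zero mean and unit variance first.
X_scaled = StandardScaler().fit_transform(X)

km = KMeans(n_clusters=2, n_init=10, random_state=42)
labels = km.fit_predict(X_scaled)
print(labels[:10])
```

Whether to scale is itself a modeling choice: if one feature genuinely should carry more weight, standardizing it away can hurt rather than help.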
Aryan Garg is a B.Tech. Electrical Engineering student, currently in the final year of his undergrad. His interests lie in web development and machine learning. He has pursued these interests and is eager to work further in these directions.


This MIT team is fighting malicious AI image manipulation a few pixels at a time


As AI image creation and editing becomes more prevalent, a new digital privacy concern has arisen — the unauthorized AI editing of someone's artwork or picture. To date, there's nothing to stop someone from taking a picture online, uploading it to an AI program, and manipulating it for all sorts of purposes.

But a new technique from a team at MIT could change that.


Simply called "PhotoGuard," the method involves a deep understanding of the algorithms that AI operates on. With that understanding, the team developed ways to very subtly change a picture, disrupting how AI interprets it. And if AI can't understand an image, it can't edit it.

"At the core of our approach," the MIT team explained in a paper on their project, "is the idea of image immunization — that is, making a specific image resistant to AI-powered manipulation by adding a carefully crafted (imperceptible) perturbation to it."

PhotoGuard works by altering a few select pixels in each image in such a way that AI sees things that aren't there. These changes aren't visible to the human eye, but they're blindingly bright to AI. When the AI sees the edited pixels, it overestimates their importance and edits the image to those pixels instead of the rest of the image.


To test their results, the MIT team took 60 images and generated AI edits using various prompts on both immunized and non-immunized versions of each image. Once the new images were created, they used several metrics to determine how similar the edits were. The result? Edits of immunized images were "noticeably different from those of non-immunized images."

Of course, the method isn't foolproof. If someone wanted badly enough, they could still maliciously edit an image — perhaps by cropping the photo until the troublesome pixels are cut out, or by simply applying a filter to it. Still, this presents a significant hurdle that could deter a lot of people.

And while this method is effective against this generation of AI, that doesn't necessarily mean it will be in the future. That's why PhotoGuard creators encourage growth in this area not just through technical methods, but through "collaboration between organizations that develop large diffusion models, end-users, as well as data hosting and dissemination platforms."


Right now, PhotoGuard is simply a technique. There's no software available to the public, and its creators admit that there's a lot of work to do before it's practical and available to the general public. Still, this is a step forward to guard against new threats from AI, the MIT team says, and a sign that companies need to invest in the fight.
