Accenture, a leading global professional services company, has recently made significant leadership changes in India. The company has announced the appointment of Ajay Vij to the newly created role of country managing director, while Sandeep Dutta has been tasked with leading Accenture’s India market unit. Meanwhile, Rekha M. Menon, the senior managing director and chairperson for Accenture in India, is set to retire from her position as of June 30.
Vij, who previously served as corporate services and sustainability lead for India, will now expand his responsibilities to provide overall leadership and drive coordinated decision-making for key company priorities. In his new role as country managing director, Vij will be responsible for overseeing the company’s operations in the country, managing client relationships, and driving growth strategies to help Accenture maintain its market leadership.
Dutta, on the other hand, will be responsible for leading Accenture’s India market unit, serving as the company’s business lead in the domestic market. With his extensive experience in sales and business development, Dutta will be responsible for driving business and operations in India, focusing on growth, market differentiation, and clients. In addition, he will work closely with local business communities and represent Accenture with local industry and trade bodies.
Rekha M. Menon, who is set to retire after a successful 20-year career at Accenture, has played a significant role in growing the company’s business, strengthening its presence in communities, and building relationships with industry, government, and clients. As chairperson, she has been an active advocate of inclusion and diversity and has pioneered the India corporate citizenship strategy. Her departure marks a significant moment for the company, and her legacy will undoubtedly be felt by her colleagues and industry peers for years to come.
Overall, these leadership changes are designed to help Accenture continue to thrive in the Indian market, which is critical to the company’s global success. The new appointees bring a wealth of experience and expertise to their respective roles and are expected to provide strong leadership, drive growth, and foster innovation in the company’s operations in India.
Accenture Bets Big on AI
Accenture recently announced its plans to acquire Flutura, an AI company based in Bangalore and Houston, to increase performance in plants, refineries, and supply chains.
Salesforce recently announced plans to collaborate with Accenture to accelerate the deployment of generative AI for CRM. Together, the companies intend to establish an acceleration hub for generative AI that provides organisations with the technology and experience they need to scale Einstein GPT, Salesforce’s generative AI for CRM, helping to increase employee productivity and transform customer experiences.
Accenture has also tied up with Adobe to help enterprise marketers optimise their content supply chains to drive personalised customer experiences at scale. By combining Accenture’s experience in process improvement and change management with Adobe’s content supply chain solution, the collaboration will help marketers efficiently produce personalised, dynamic, and optimised content, thus remaining relevant amid uncertainty. The new services bring together the people, tools, and workstreams needed for clients to plan, create, manage, and deliver content across industries and around the globe.
The post Accenture Appoints New Leadership Roles in India appeared first on Analytics India Magazine.
The Google Bard experimental chatbot gives IT pros the opportunity to explore generative artificial intelligence capabilities. IT pros who treat Google Bard as an exploration engine — i.e., a way to gather and explore information and generate relevant text — may significantly shorten the time it takes to research and produce paragraphs about all sorts of products, processes and topics.
After learning how Bard differs from Google Search and other chatbots, follow four tips for getting the most out of Bard. Note: Most users can benefit from these tips, though the examples are focused on IT folks.
How does Bard differ from Google search?
The Bard chat experience works differently than Google search. While a Google search query relies on a single set of keywords, Bard accepts prompts in natural language: You enter a sentence or full paragraph to convey what you want. And, while google.com treats every search as a single, one-time query, bard.google.com allows you to delve successively deeper with added prompts that refer to and expand on prior responses.
SEE: How to get better search results in Google (TechRepublic)
How does Google Bard differ from similar chatbots?
Unlike many chatbots that can only respond with historical content gathered through 2021, Google Bard is connected to the internet. Ask Google Bard about current events, weather or sports, and the system should provide an accurate response.
SEE: Google’s Bard is an AI rival to ChatGPT.
But where a search engine delivers either an answer or a list of links, Google Bard might be considered an exploration engine you use to move a few steps beyond the results of a simple search.
While this sort of system can support many types of tasks, IT leaders might find Google Bard especially useful to help identify solutions, suggest sequences (e.g., “Can you provide a 4-step process to improve security?”) and draft text. Bard can generate each of these as well as create suggested communications and presentation outlines.
How to experiment with Google Bard
To experiment with Bard, you need to go to bard.google.com and join a waitlist with an individual Google account. As of May 2023, access to Bard remains available only to individual accounts, not organizational Google Workspace accounts.
After Google grants you access, you may chat with Bard while signed in to your Google account in a browser at bard.google.com.
Tip 1: Specify context and details in your Google Bard prompt
Google Bard may be useful in any sort of hardware or software selection process, but make sure to add context and details to your prompt that you might otherwise omit when talking to a colleague. For example, if you were to ask a colleague to research office printer options, you might note that things like speed, color and duty cycle, also known as the number of pages per month, are all important. Note these key features in your request, just as you would with a colleague:
“Suggest a few printers that print in color, at a speed of at least 30 pages per minute, that also have a duty cycle of 7,500 pages per month.”
You will also want to provide Bard context that a colleague might reasonably be expected to assume, such as whether you are selecting a printer for a shared office environment (Figure A) or for an individual:
“I am trying to select a printer to be used in a branch office of a global company, where it will be shared by a team of people. Can you suggest a few printers that print in color, at a speed of at least 30 pages per minute, that also have a duty cycle of 7,500 pages per month?”
Figure A
Specify context and details, such as the “printer to be used in a branch office of a global company” in this Google Bard prompt. Note that for some responses, Bard offers three differently formatted drafts, as circled in red in this example.
Not sure what sort of context is needed? Review the standard set of reporting questions — who, what, when, why, where and how — and consider the answers to each of those as you craft your prompt. This type of context can help the system respond with increasingly relevant content.
Tip 2: Prompt Google Bard with a number
When possible, provide Google Bard a target number — of words, items or steps in a sequence — to nudge the system toward a preferred level of detail and scope. Seeking a smaller number of items may get you a focused list, but a larger number leaves you with more options to investigate.
When you replace the X with 20 (Figure B, right), you’ll get a much longer list than when you request only 5 (Figure B, left) in the following prompt:
Figure B
Prompt Google Bard to provide a target number of words, items or steps to obtain a desired level of detail such as 5 solutions (left) or 20 solutions (right). Greater numbers give you more words, items and steps to explore, while fewer numbers result in a more concise, focused response.
“Can you suggest X software solutions that might work well to help us manage customer data for an organization of 100 people that has 2,000 customers?”
The same is true for sequences: A lower number of steps results in a more condensed description of a process. For example, try the following prompt with a range of steps between 5 and 10:
“In X steps, describe the process needed to deploy security keys for multifactor authentication at an organization of 200 people.”
With 3 steps, the descriptions are broad and general. As the number increases, each step in the returned text tends to be more specific.
Tip 3: Prompt Google Bard for additional details
After the initial response, you may continue the chat to seek additional details (Figure C). You might frequently ask for information, such as:
“How much do each of those cost?”
“Can you create a project calendar for this?”
“Can you suggest other options?”
“Tell me more about X.”
Figure C
Prompt Google Bard for additional details as desired. An earlier prompt in this thread had inquired about using security keys for Google Workspace. The later prompt shown uses a simple “need a detailed plan for this” to refer to the prior inquiry. The chat-like stream of queries results in a dramatically different experience than standard keyword search.
Remember, you may ask complex questions, so you could request “Tell me more about Item1, Item2 and Item3” to learn more about three items at once. You can also use words like “this” (as shown in Figure C) to refer to an earlier chat request.
In a few cases, the system may pause and display, for example, only items 1 through 23 of a requested 25-item list. If that happens, try a “More?” or “Continue” prompt. Often, it will then complete the requested task and provide additional content.
Tip 4: Verify Google Bard’s response
Take the time to verify, correct and edit any response you receive from Bard — and every other large language model-driven chat system, for that matter. Google states that Bard is experimental; even if it were not, information on the internet isn’t always entirely complete, current or correct. That may seem obvious, but far too few people rigorously review, test and accurately evaluate online content.
Let me know how you use Google Bard for work by either messaging or mentioning me on Mastodon (@awolber).
One of the more intriguing discoveries about ChatGPT is that it can write pretty good code. I tested this out in February when I asked it to write a WordPress plugin my wife could use on her website. It did a fine job, but it was a very simple project.
How can you use ChatGPT to write code as part of your daily coding practice? That's what we're going to explore here.
What types of coding can ChatGPT do well?
There are two important facts about ChatGPT and coding. The first is that it can, in fact, write useful code. The second is that it can get completely lost, fall down the rabbit hole, chase its own tail, and produce absolutely unusable garbage.
Also: I'm using ChatGPT to help me fix code faster, but at what cost?
I found this out the hard way. After I finished the WordPress plugin, I decided to see how far ChatGPT could go. I wrote out a very careful prompt for a Mac application, including detailed descriptions of user interface elements, interactions, what would be provided in settings, how they would work, and so on. Then I fed it to ChatGPT.
ChatGPT responded with just a flood of text and code. Then it stopped mid-code. When I asked it to continue, it vomited out even more code and text. I requested continue after continue and it dumped out more and more code. But… none of it was usable. It didn't identify where the code should go, how to construct the project, and — when I looked carefully at the code produced — it left out major operations I requested, leaving in simple text descriptions stating "program logic goes here."
Also: Okay, so ChatGPT just debugged my code. For real.
After a bunch of repeated tests, it became clear to me that if you ask ChatGPT to deliver a complete application, it will fail. A corollary to this observation is that if you know nothing of coding and want ChatGPT to build you something, it will fail.
Where ChatGPT succeeds, and does so very well, is helping someone who already knows how to code to build specific routines and get specific tasks done. Don't ask for an app that runs on the menu bar. But if you ask ChatGPT for a routine to put a menu on the menu bar, and then paste that into your project, it will go quite well.
Also: How to use ChatGPT to create an app
Also keep in mind that while it appears ChatGPT has a tremendous amount of domain-specific knowledge (and it often does), it lacks wisdom. As such, it may be able to write code, but it won't be able to write code containing the nuances for very specific or complex problems that require deep experience to understand.
Use ChatGPT to demo techniques, write small algorithms, and write subroutines. You can even get ChatGPT to help you break down a bigger project into chunks, and then you can ask it to help you code those chunks.
Also: How does ChatGPT work?
So, with that in mind, let's look at some specific steps for how to use ChatGPT to write code.
How ChatGPT can help you write code
I started with a prompt that was designed to elicit information about what libraries would provide the functionality I wanted. A library (for those of you reading along who aren't programmers) is a body of code a programmer can access that does a lot of the heavy lifting for a specific purpose. A big part of modern programming is finding and choosing the right libraries. So this is a good starting point.
Also: I used ChatGPT to write the same routine in 12 top programming languages. Here's how it did
In this case, I'm looking at blocks of code written by other people that will summarize text. Here's my first prompt:
Describe ten different open source AI libraries (and the languages they work with) that I can use to generate a summary of the main core contents of any web page, ignoring any ads or embedded materials.
This gave me exactly what I wanted, including a mention of OpenAI's offerings. I think OpenAI would do great here, but for this hypothetical project, I don't want to budget for API fees. So I'll narrow down the question.
Are any of these free?
ChatGPT hedged its bets with its answer. Here's what it said: "Yes, all ten of these AI libraries are open source and free to use. However, some of them may have usage limits or require payment for access to additional features or resources." So, based on that, I clarified my query.
Which of these libraries have no usage limits and don't require any additional payment or licensing?
Notice how this is very much a conversation. I don't have to re-ask the originating question. I'm just drilling down in the same way I might if I had an expert at hand and was seeking clarification. In this case, ChatGPT gave me eight library choices, but none of them mentioned the PHP language that I was planning to code in. So, here's the next prompt:
Of those 8 libraries, can I use any with PHP?
It returned three libraries, but I wasn't sure about what each did. So another question:
What's the difference between Sumy, Gensim, and NLTK?
I still wasn't sure, so I clarified my use plan and then asked:
If I want to create summaries of web page news articles, which library would work better?
The answer I got was clear and promising: "Sumy is specifically designed for text summarization, which is the task of creating a summary that captures the most important information from a piece of text." So, now it was time to see what was involved in using it with PHP. I asked my last question for this part of the project:
Can you explain how to use Sumy from PHP?
Feel free to play along on your computer and paste these prompts into your instance of ChatGPT. Notice that, in step 1, I decided what program module I was going to get help on. Then, in this step, I had a conversation with ChatGPT to decide what library to use and how to integrate it into my project.
Also: The best AI chatbots: ChatGPT and other interesting alternatives to try
That may not seem like programming, but I assure you it is. Programming isn't just blasting lines of code onto a page. Programming is figuring out how to integrate all the various resources and systems together, and how to talk to all the various components of your solution. Here, ChatGPT helped me do that integration analysis.
By the way, I was curious whether Google's Bard could help in the same way. Bard can't actually write code, but it did give some extra insights into the planning aspect of programming over ChatGPT's responses. So don't hesitate to use multiple tools to triangulate on answers you want. Here's that story: Bard vs. ChatGPT: Can Bard help you code? Since that article, Google added some coding capabilities to Bard. But they're not all that great. Here, read about it: I tested Google Bard's new coding skills. It didn't go well.
Coding is next.
That means you have to do it yourself. As we know, the first draft of a piece of code is rarely the final code. So even if you were to expect ChatGPT to generate final code, it would really be a starting point, one where you need to take it to completion, integrate it into your bigger project, test it, refine it, debug it, and so on.
Also: I asked ChatGPT to write a short Star Trek episode. It actually succeeded
But that doesn't mean the example code is worthless. Far from it. Let's take a look at a prompt I wrote based on the project I described earlier. Here's the first part:
Write a PHP function called summarize_article.
As input, summarize_article will be passed a URL to an article on a news-related site like ZDNET.com or Reuters.com.
I'm telling ChatGPT the programming language it should use. I'm also telling it the input but, while doing so, providing two sites as samples to help ChatGPT understand the style of article. Honestly, I'm not sure ChatGPT didn't ignore that bit of guidance. Next, I'll tell it how to do the bulk of the work:
Inside summarize_article, retrieve the contents of the web page at the URL provided. Using the library Sumy from within PHP and any other libraries necessary, extract the main body of the article, ignoring any ads or embedded materials, and summarize it to approximately 50 words. Make sure the summary consists of complete sentences. You can go above the 50 words to finish the last sentence, if necessary.
This is very similar to how I'd instruct an employee. I'd want that person to know that they weren't only restricted to Sumy. If they needed another tool, I wanted them to use it.
Also: Want to learn more about prompt engineering? This free course from OpenAI can help
I also specified an approximate number of words to create bounds for what I wanted as a summary. A later version of the routine might take that number as a parameter. I then ended by saying what I wanted as a result:
Once processing is complete, code summarize_article so it returns the summary in plain text.
The resulting code is pretty simple. ChatGPT did call on another library (Goose) to retrieve the article contents. It then passed that to Sumy with a 50-word limit and returned the result. That's it. But once the basics are written, it's a mere matter of programming to go back in and add tweaks, customize what's passed to the two libraries, and deliver the results.
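The generated PHP isn't reproduced in the article, but the pipeline it implements (fetch, extract the body, summarize to roughly 50 words of complete sentences) can be sketched in a few lines of Python. This is a toy illustration, not the actual output: the tag-stripping stands in for Goose, and the frequency scoring stands in for Sumy's summarizers.

```python
import re
from collections import Counter

def summarize_article(html, max_words=50):
    """Toy stand-in for the Goose + Sumy pipeline: strip markup,
    score sentences by word frequency, and return a summary of
    complete sentences around max_words long."""
    # Crude "Goose": drop tags and collapse whitespace.
    text = re.sub(r"<[^>]+>", " ", html)
    text = re.sub(r"\s+", " ", text).strip()

    # Split into sentences on terminal punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", text)

    # Crude "Sumy": score each sentence by the frequency of its words.
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    scored = sorted(sentences, key=lambda s: -sum(
        freq[w] for w in re.findall(r"[a-z']+", s.lower())))

    # Keep top-scored sentences until we pass the word budget,
    # finishing the last sentence as the prompt allowed.
    summary, count = [], 0
    for s in scored:
        if count >= max_words:
            break
        summary.append(s)
        count += len(s.split())

    # Restore original order for readability.
    summary.sort(key=sentences.index)
    return " ".join(summary)
```

A real implementation would hand the extracted body to one of Sumy's summarizers (such as its LSA summarizer); the toy version simply keeps the highest-scoring sentences until the word budget is spent.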
One interesting point of note. ChatGPT created a sample call to the routine it wrote, using a URL from after 2021 (when ChatGPT's dataset ends).
I checked that URL against both Reuters' site and the Wayback Machine, and it doesn't exist. ChatGPT just made it up.
FAQ
Does ChatGPT replace programmers?
Not now, or at least not yet. ChatGPT programs at the level of a talented first-year programming student, but it's lazy (like that first-year student). It might reduce the need for very entry-level programmers, but at its current level, I think it will just make life easier for entry-level programmers (and even programmers with more experience) to write code and look up information. It's definitely a time-saver, but there are few programming projects it can do on its own — at least now. In 2030? Who knows.
How do I get coding answers in ChatGPT?
Just ask it. You saw above how I used an interactive discussion dialog to narrow down the answers I wanted. When you're working with ChatGPT, don't expect one question to magically do all your work for you. But use ChatGPT as a helper and resource, and it will give you a lot of very helpful information. Of course, test that information — because, as John Schulman, a cofounder of OpenAI, says, "Our biggest concern was around factuality, because the model likes to fabricate things."
What programming languages does ChatGPT know?
Most of them. I got very side-tracked trying this. I tested common modern languages, like PHP, Python, Java, Kotlin, Swift, C#, and more. But then I had it write code in obscure dark-ages languages like COBOL, Fortran, Forth, LISP, ALGOL, RPG (the report program generator, not the role-playing game), and even IBM/360 assembly language.
As the icing on the cake, I gave it this prompt:
Write a sequence that displays 'Hello, world' in ascii blinking lights on the front panel of a PDP 8/e
The PDP 8/e was my very first computer, and ChatGPT actually gave me instructions for toggling in a program using front panel switches. I was impressed, gleeful, and ever so slightly afraid.
Also: How to use ChatGPT to summarize a book, article, or research paper
So what's the bottom line? Honestly, it's that ChatGPT can be a very helpful tool. Just don't ascribe superpowers to it. Yet.
You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.
Computer vision (CV) accuracy has climbed from roughly 50% to 99% over the past decade, and the technology is expected to improve further with modern algorithms and image segmentation techniques. Recently, Meta's FAIR lab released the Segment Anything Model (SAM), a game-changer in image segmentation. This advanced model can produce detailed object masks from input prompts, taking computer vision to new heights. It could fundamentally change how we interact with digital technology.
Let's explore image segmentation and briefly uncover how SAM impacts computer vision.
What is Image Segmentation & What Are its Types?
Image segmentation is a process in computer vision that divides an image into multiple regions or segments, each representing a different object or area of the image. This approach allows experts to isolate specific parts of an image to obtain meaningful insights.
Image segmentation models are trained to improve output by recognizing important image details and reducing complexity. These algorithms effectively differentiate between different regions of an image based on features such as color, texture, contrast, shadows, and edges.
By segmenting an image, we can focus our analysis on the regions of interest for insightful details. Below are different image segmentation techniques.
Semantic segmentation involves labeling pixels into semantic classes.
Instance segmentation goes further by detecting and delineating each object in an image.
Panoptic segmentation assigns unique instance IDs to individual object pixels, resulting in more comprehensive and contextual labeling of all objects in an image.
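The difference between these labeling schemes can be made concrete with a toy example. The 4x4 "image" and class IDs below are invented for illustration: a semantic mask assigns every pixel a class, while an instance mask gives each object its own ID.

```python
# Toy 4x4 image labeled two ways (values invented for illustration).
# Semantic mask: every pixel gets a class ID (0 = background, 1 = "cat").
semantic = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 1],
]
# Instance mask: each object gets its own ID, so the two cats are told apart.
instance = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 2],
    [0, 0, 2, 2],
]

def pixels_per_label(mask):
    """Count pixels per label: semantic gives per-class area,
    instance gives per-object area."""
    counts = {}
    for row in mask:
        for label in row:
            counts[label] = counts.get(label, 0) + 1
    return counts

print(pixels_per_label(semantic))  # {0: 9, 1: 7}
print(pixels_per_label(instance))  # {0: 9, 1: 4, 2: 3}
```

Panoptic segmentation effectively combines the two views, pairing each pixel's class label with an instance ID.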
Segmentation is implemented using image-based deep learning models. These models extract valuable data points and features from the training set, then turn this data into vectors and matrices to capture complex features. Some of the widely used deep learning models behind image segmentation are:
Convolutional Neural Networks (CNNs)
Fully Convolutional Networks (FCNs)
Recurrent Neural Networks (RNNs)
How Does Image Segmentation Work?
In computer vision, most image segmentation models consist of an encoder-decoder network. The encoder encodes a latent-space representation of the input data, which the decoder decodes into segment maps, or in other words, maps outlining each object’s location in the image.
Usually, the segmentation process consists of three stages:
An image encoder transforms the input image into a numerical representation (vectors and matrices) for processing.
The encoder aggregates these vectors at multiple levels into image embeddings.
A mask decoder takes the image embeddings as input and produces a mask that outlines each object in the image separately.
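The three stages above can be mimicked with a deliberately tiny sketch: mean pooling plays the encoder, and nearest-neighbor upsampling plus a threshold plays the mask decoder. Real models learn these mappings with convolutions; the numbers here are invented for illustration.

```python
def encode(image, k=2):
    """Toy "encoder": k x k mean pooling compresses the image
    into a coarser grid of embeddings (here, plain averages)."""
    h, w = len(image), len(image[0])
    return [
        [sum(image[i + di][j + dj] for di in range(k) for dj in range(k)) / (k * k)
         for j in range(0, w, k)]
        for i in range(0, h, k)
    ]

def decode(latent, k=2, threshold=0.5):
    """Toy "decoder": nearest-neighbor upsampling back to input size,
    then thresholding into a binary segment map."""
    return [
        [1 if latent[i // k][j // k] > threshold else 0
         for j in range(len(latent[0]) * k)]
        for i in range(len(latent) * k)
    ]

image = [
    [0.9, 0.8, 0.1, 0.0],
    [0.7, 1.0, 0.2, 0.1],
    [0.0, 0.1, 0.9, 0.8],
    [0.2, 0.1, 1.0, 0.9],
]
mask = decode(encode(image))
# mask == [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
```

The bright top-left and bottom-right regions survive as segments; everything else is background. A learned model replaces the fixed pooling and thresholding with trained weights.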
The State of Image Segmentation
Starting in 2014, a wave of deep learning-based segmentation algorithms emerged, such as CNN+CRF and FCN, which made significant progress in the field. 2015 saw the rise of the U-Net and Deconvolution Network, improving the accuracy of the segmentation results.
Then in 2016, Instance-Aware Segmentation, V-Net, and RefineNet further improved the accuracy and speed of segmentation. By 2017, Mask R-CNN and FC-DenseNet introduced object detection and dense prediction to segmentation tasks.
In 2018, Panoptic Segmentation, Mask-Lab, and Context Encoding Networks were at the center of the stage as these approaches addressed the need for instance-level segmentation. By 2019, Panoptic FPN, HRNet, and Criss-Cross Attention introduced new approaches for instance-level segmentation.
In 2020, the trend continued with the introduction of DetectoRS, Panoptic-DeepLab, PolarMask, CenterMask, DC-NAS, and EfficientNet + NAS-FPN. Finally, in 2023, we have SAM, which we will discuss next.
Segment Anything Model (SAM) – General Purpose Image Segmentation
The Segment Anything Model (SAM) is a new approach that can perform interactive and automatic segmentation tasks in a single model. Previously, interactive segmentation allowed for segmenting any object class but required a person to guide the method by iteratively refining a mask.
Automatic segmentation, by contrast, allows the segmentation of specific object categories defined ahead of time. SAM unifies both capabilities, and its promptable interface makes it highly flexible. As a result, SAM can address a wide range of segmentation tasks using a suitable prompt, such as clicks, boxes, text, and more.
SAM is trained on a diverse dataset of over 1 billion masks, making it possible to recognize new objects and images that were unavailable in the training set. This framework could revolutionize CV models in applications like self-driving cars, security, and augmented reality.
In self-driving cars, SAM can detect and segment objects around the vehicle, such as other vehicles, pedestrians, and traffic signs. In augmented reality, SAM can segment the real-world environment to place virtual objects in appropriate locations, creating a more realistic and engaging UX.
Image Segmentation Challenges in 2023
The increasing research and development in image segmentation also bring significant challenges. Some of the foremost image segmentation challenges in 2023 include the following:
The increasing complexity of datasets, especially for 3D image segmentation
The development of interpretable deep models
The use of unsupervised learning models that minimize human intervention
The need for real-time and memory-efficient models
Eliminating the bottlenecks of 3D point-cloud segmentation
The Future of Computer Vision
The global computer vision market impacts multiple industries and is projected to reach over $41 billion by 2030. Modern image segmentation techniques like the Segment Anything Model coupled with other deep learning algorithms will further strengthen the fabric of computer vision in the digital landscape. Hence, we'll see more robust computer vision models and intelligent applications in the future.
To learn more about AI and ML, explore Unite.ai – your one-stop solution to all queries about tech and its modern state.
OpenAI is here with yet another project. This time it’s Shap-E, a conditional generative model for 3D assets. According to the paper, unlike other 3D generative models that produce a single output representation, Shap-E can directly generate the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields (NeRFs) from a single text prompt.
One of OpenAI’s few open-source offerings, Shap-E is available with model weights, inference code, and samples on GitHub.
You can find the GitHub repository for Shap-E here.
According to the paper, Shap-E is trained in two stages. Firstly, an encoder is trained, which maps 3D assets deterministically into the parameters of an implicit function. Secondly, a conditional diffusion model is trained on the encoder’s outputs.
“Our models can generate complex and diverse 3D assets in just a few seconds when trained on a large dataset of paired 3D and text data,” reads the paper.
The interesting part about OpenAI’s Shap-E is that despite modeling a higher-dimensional, multi-representation output space, Shap-E converges faster and produces comparable or better sample quality than Point-E.
Though the generated 3D objects might look pixelated and rough, they can be produced from a single text prompt. Another limitation noted in the paper is that the model can currently only produce objects from single-object prompts with simple attributes, and struggles to combine multiple attributes.
OpenAI had previously released Point-E, which was touted as a 3D DALL-E 2. The same diffusion technique used in DALL-E and Point-E is also leveraged in Shap-E. But this time, instead of Point-E’s point-cloud diffusion, users can generate NeRF-capable textured meshes.
The post OpenAI Releases Shap-E, NeRF Enabled Generative Model for 3D Assets appeared first on Analytics India Magazine.
The newest plague — or partner — of educational systems is ChatGPT. News outlets are scrambling to gather teachers’ opinions on whether this phenomenon should be embraced or discarded because of how much it could change the educational landscape. How are students leveraging this tool for their studies? How are teachers supposed to react to ChatGPT in education?
How Is ChatGPT a Friend?
Part of the beauty is that most people can learn how to use ChatGPT with simple experimentation or a YouTube tutorial, unless they’re looking for more in-depth ways to tap its capabilities. That’s its first boon, but several others could make it a valuable educational tool.
There’s More Inclusivity
Nontraditional learners could get more out of tools like ChatGPT than mainstream methods. It could be an audio-visual assistant where students can freely ask as many clarifying questions as necessary without judgment. Teachers juggling countless individualized education plans could also take advantage of ChatGPT by asking how to curate lesson plans for students with disabilities or other learning requirements.
Students Prepare for Workforces
Almost every sector embraces some form of AI, and many students will grow up to work alongside AI coworkers or assets. Because the transition is inevitable, schools must embrace AI to prepare children adequately for the real world. It will empower students heading into the workforce if they know how to use chatbots responsibly. Plus, learners going into tech-reliant industries need these skills if they're going to qualify for work.
Learners Get a Free Tutor
ChatGPT in education could look like students asking it to provide examples of covalent bonds or metaphors in literature. It could simplify complex concepts, giving step-by-step instructions when a classroom may not have the attention or time to devote to reiterating every idea in minute detail.
Students can leverage ChatGPT as a tutor or homework supplement, especially if they need to catch up. ChatGPT’s ability to make curated responses is unparalleled, so if a student needs a scientific explanation for a sixth-grade reading level, ChatGPT can adapt.
How Is ChatGPT a Foe?
These benefits do come with real drawbacks, and teachers aren't crying out to ban ChatGPT without cause. These are some of the most pervasive.
Students Are Cheating
ChatGPT could write essays or code with relative accuracy, whether for single assignments or entire classes. It could encourage lazy or uninterested students to coast through courses without effort, much like how foreign language learners exploited Google Translate when it first came out.
Data Privacy Is Up in the Air
This AI is making cybersecurity analysts curious. ChatGPT security isn't the toughest, but students and teachers input countless data points into its database daily. Are educational systems liable if threat actors compromise that information? Is it the educators' responsibility to teach their subjects and cybersecurity hygiene on top of it, especially if they encourage using AI in the classroom? Cloud-based and public systems like this may advertise improved cybersecurity and compliance, but how sure can teachers be?
Critical Thinking Is in Jeopardy
Students could use ChatGPT to unleash their creative potential, but it's just as likely to harm critical thinking abilities in the long run. Why would students need to exercise problem-solving skills if they can ask ChatGPT to decide for them? A desire for instant gratification may displace the genuine curiosity to learn that drove students who had to expend more effort to come up with the solutions they needed.
What About Teachers?
ChatGPT in education impacts teachers much differently than students, but they will face just as many opportunities and drawbacks, if not more. Some may argue teachers are responsible for incorporating AI in the classroom to modernize lesson plans and make education relevant to modern job expectations.
Conversely, teachers will spend more time on quality control when grading assignments, even as ChatGPT could save them hours of manual grading. The time-saving possibilities are as creative as the teacher. Lesson plans become more engaging and diverse with a few questions posed to ChatGPT.
Ultimately, banning or allowing ChatGPT in the classroom will set a precedent for teacher-student relationships with technology. Teachers calling for outright bans insinuate they can’t trust children to use ChatGPT for genuine educational purposes. Is it healthy for teachers to distrust students this way? Alternatively, is it beneficial for teachers to constantly question if students abuse their trust? Both sides pose a puzzling ethical question educators don’t have an answer to yet.
What Side Is ChatGPT On?
Whether ChatGPT in education is a friend or foe will come down to how teachers instruct with it and the precedents they set for AI etiquette. Regardless, it's indisputable that AI will eventually integrate into education.
Is it best to delay that shift or start working on managing student-AI relationships now? Depending on a student’s priorities and character, it has equal potential to be destructive or advantageous. The world will have to see which side of the scale ChatGPT falls toward in time.
Shannon Flynn (@rehackmagazine) is a technology blogger who writes about IT trends, cybersecurity, and business tech news. She's also a staff writer at MakeUseOf and is the Managing Editor at ReHack.com. Follow KDnuggets to read more from Shannon and other data science updates. See Shannon's personal website for more info.
Critical questions still need to be addressed about the use of generative artificial intelligence (AI), so businesses and consumers keen to explore the technology must be mindful of potential risks.
As it's currently still in its experimentation stage, businesses will have to figure out the potential implications of tapping generative AI, says Alex Toh, local principal for Baker McKenzie Wong & Leow's IP and technology practice.
Also: How to use the new Bing (and how it's different from ChatGPT)
Key questions should be asked about whether such explorations continue to be safe, both legally and in terms of security, says Toh, who is a Certified Information Privacy Professional by the International Association of Privacy Professionals. He also is a certified AI Ethics and Governance Professional by the Singapore Computer Society.
Amid the increased interest in generative AI, the tech lawyer has been fielding frequent questions from clients about copyright implications and policies they may need to implement should they use such tools.
One key area of concern, which is also heavily debated in other jurisdictions, including the US, EU and UK, is the legitimacy of taking and using data available online to train AI models. Another area of debate is whether creative works generated by AI models, such as poetry and painting, are protected by copyright, he tells ZDNET.
Also: How to use DALL-E 2 to turn your creative visions into AI-generated art
There are risks of trademark and copyright infringement if generative AI models create images that are similar to existing work, particularly when they are instructed to replicate someone else's artwork.
Toh says organizations want to know the considerations they need to take into account if they explore the use of generative AI, or even AI in general, so the deployment and use of such tools does not lead to legal liabilities and related business risks.
He says organizations are putting in place policies, processes, and governance measures to reduce risks they may encounter. One client, for instance, asked about liabilities their company could face if a generative AI-powered product it offered malfunctioned.
Toh says companies that decide to use tools such as ChatGPT to support customer service via an automated chatbot, for example, will have to assess its ability to provide answers the public wants.
Also: How to make ChatGPT provide sources and citations
The lawyer suggests businesses should carry out a risk analysis to identify the potential risks and assess whether these can be managed. Humans should be tasked to make decisions before an action is taken and only left out of the loop if the organization determines the technology is mature enough and the associated risks of its use are low.
Such assessments should include the use of prompts, which is a key factor in generative AI. Toh notes that similar questions can be framed differently by different users. He says businesses risk tarnishing their brand should a chatbot system decide to respond correspondingly to an aggressive customer.
Countries, such as Singapore, have put out frameworks to guide businesses across any sector in their AI adoption, with the main objective of creating a trustworthy ecosystem, Toh says. He adds that these frameworks should include principles that organizations can easily adopt.
In a recent written parliamentary reply on AI regulatory frameworks, Singapore's Ministry of Communications and Information pointed to the need for "responsible" development and deployment. It said this approach would ensure a trusted and safe environment within which AI benefits can be reaped.
Also: This new AI system can read minds accurately about half the time
The ministry said it rolled out several tools to drive this approach, including a test toolkit known as AI Verify to assess responsible deployment of AI and the Model AI Governance Framework, which covers key ethical and governance issues in the deployment of AI applications. The ministry said organizations such as DBS Bank, Microsoft, HSBC, and Visa have adopted the governance framework.
The Personal Data Protection Commission, which oversees Singapore's Personal Data Protection Act, is also working on advisory guidelines for the use of personal data in AI systems. These guidelines will be released under the Act within the year, according to the ministry.
It will also continue to monitor AI developments and review the country's regulatory approach, as well as its effectiveness to "uphold trust and safety".
Mind your own AI use
For now, while the landscape continues to evolve, both individuals and businesses should be mindful of the use of AI tools.
Organizations will need adequate processes in place to mitigate the risks, while the general public should better understand the technology and gain familiarity with it. Every new technology has its own nuances, Toh says.
Baker & McKenzie does not allow the use of ChatGPT on its network due to concerns about client confidentiality. While personally identifiable information (PII) can be scrubbed before the data is fed to an AI training model, there still are questions about whether the underlying case details used in a machine-learning or generative AI platform can be queried and extracted. These uncertainties meant prohibiting its use was necessary to safeguard sensitive data.
Also: How to use ChatGPT to write code
The law firm, however, is keen to explore the general use of AI to better support its lawyers' work. An AI learning unit within the firm is working on research into potential initiatives and how AI can be applied within the workforce, Toh says.
Asked how consumers should ensure their data is safe with businesses as AI adoption grows, he says there is usually legal recourse in cases of infringement, but notes that it's more important that individuals focus on how they curate their digital engagement.
Consumers should choose trusted brands that invest in being responsible for their customer data and its use in AI deployments. Pointing to Singapore's AI framework, Toh says that its core principles revolve around transparency and explainability, which are critical to establishing consumer trust in the products they use.
The public's ability to manage their own risks will probably be essential, especially as laws struggle to catch up with the pace of technology.
Also: Generative AI can make some workers a lot more productive, according to this study
AI, for instance, is accelerating at "warp speed" without proper regulation, notes Cyrus Vance Jr., a partner at Baker McKenzie's North America litigation and government enforcement practice, as well as global investigations, compliance, and ethics practice. He highlights the need for public safety to move along with the development of the technology.
"We didn't regulate tech in the 1990s and [we're] still not regulating today," Vance says, citing ChatGPT and AI as the latest examples.
The increased interest in ChatGPT has triggered tensions in the EU and UK, particularly from a privacy perspective, says Paul Glass, Baker & McKenzie's head of cybersecurity in the UK and part of the law firm's data protection team.
The EU and UK are currently debating how the technology should be regulated, and whether new laws are needed or existing ones should be expanded, Glass says.
Also: These experts are racing to protect AI from hackers
He also points to other associated risks, including copyright infringements and cyber risks, where ChatGPT has already been used to create malware.
Countries such as China and the US are also assessing and seeking public feedback on legislation governing the use of AI. The Chinese government last month released a new draft regulation that it said was necessary to ensure the safe development of generative AI technologies, including ChatGPT.
Just this week, Geoffrey Hinton — often called the 'Godfather of AI' — said he left his role at Google so he could discuss more freely the risks of the technology he himself helped to develop. Hinton had designed machine-learning algorithms and contributed to neural network research.
Elaborating on his concerns about AI, Hinton told BBC: "Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."
As generative AI models continue to evolve, a new field called prompt engineering has arisen: the skill of crafting prompts so that the machine interprets human language precisely as the user intends. This has led to numerous courses, tools, and employment opportunities centred on prompt engineering skills.
So if you are thinking about upskilling, then choose any of the following courses for the best resources that are available.
Read more: Killing Prompt Engineering
DeepLearning.AI
Andrew Ng’s DeepLearning.AI has collaborated with OpenAI to launch a new course: ‘ChatGPT Prompt Engineering for Developers’. The free-of-cost course will let you learn how to use a large language model (LLM) effectively to build new and powerful applications. Isabella Fulford, a member of the technical staff at OpenAI, and Ng will teach how LLMs work. The course aims to offer valuable tips for prompt engineering, as well as demonstrate the various ways that LLM APIs can be used in applications for summarisation, inference, text transformation, and expansion. Furthermore, the course will cover two essential principles for crafting successful prompts, provide instruction on how to systematically develop effective prompts, and teach you to create a personalised chatbot. The short 1.5-hour course is beginner-friendly and designed to be accessible to novices with a fundamental grasp of Python. However, it is also helpful for experienced machine learning specialists who aspire to explore the forefront of prompt engineering and use LLMs.
Check out the course here.
Learn Prompting 101
The ‘Learn Prompting 101’ by Towards AI caters to beginners and offers a wide range of topics, from fundamental AI concepts to advanced prompt engineering techniques. It’s a free and open-source course that provides comprehensive guidance without overwhelming technical terms. The course is practical, featuring examples that are easy to comprehend, and encourages collaborative learning. It consists of several chapters covering basic, intermediate, and advanced applications, reliability, images, prompt injection, tooling, prompt tuning, and miscellaneous topics. The course is highly respected and referenced by reputable organizations such as Wikipedia, O’Reilly, Scale AI, and OpenAI.
Enrol for the course here.
Prompt Engineering: Getting Future Ready
Priced at Rs 449, ‘Prompt Engineering: Getting Future Ready’ is one of the best sellers on Udemy. Designed for beginners, it covers a wide range of topics related to prompt engineering. It includes more than 1,000 prompts, templates, and resources, and focuses on the primary tools utilised in prompt engineering, such as ChatGPT, Stable Diffusion, and Midjourney. Participants will learn how to effectively use each of these tools and gain a comprehensive understanding of the differences between text-to-text and image-to-image generation. The course features a comprehensive prompt guide with practical examples and hands-on exercises to help learners create images and text that are almost indistinguishable from real life. The program is suitable for learners with varying levels of experience, including beginners, experienced AI practitioners, and professionals looking to incorporate prompt engineering into their work. Participants will acquire skills in areas such as content creation, SEO techniques, AI-generated art, startup building, email marketing, social media campaigns, and designing colouring books. No prior programming expertise is required, but participants must have a functional computer and an OpenAI account.
Click here to learn more about it.
Read more: Prompts are Next Big Thing in AI-Generated Art
Prompt Engineering for ChatGPT
The ‘Prompt Engineering for ChatGPT’ course on Coursera aims to train individuals to become proficient in using large language models, such as ChatGPT. However, the effectiveness of these models largely depends on the quality of the prompts provided by the user, and the course will equip students with the necessary skills to create effective prompts. The course can be accessed by individuals with basic computer skills, and it covers a wide range of prompts, ranging from basic to advanced, to enable students to tackle problems in any domain. Upon completion of the course, students will possess the ability to leverage large language models to carry out various tasks in their personal and professional lives. The course is free and comes with a certificate.
Check their website here.
Prompt Engineering+: Master Speaking To AI
‘Prompt Engineering+: Master Speaking To AI’ is a free, short but comprehensive course on Udemy that teaches advanced techniques for prompt engineering. The course covers topics such as the anatomy of an engineered prompt, the prompt mindset, and advanced concepts like one-shot, few-shot, and zero-shot CoT (chain-of-thought) prompting. Best practices and workflow optimization techniques are also included. The course is suitable for developers, designers, content creators, writers, bloggers, business owners, marketers, sales professionals, and students studying computer science, data science, or artificial intelligence. Anyone interested in AI and language models can benefit from this course.
Apply here.
The post Worried About AI Taking Over Your Job? These 5 Prompt Engineering Courses Will Keep You Ahead of the Game! appeared first on Analytics India Magazine.
Numerical Python or NumPy is a popular library for scientific computing in Python. The NumPy library has a huge collection of built-in functionality to create n-dimensional arrays and perform computations on them.
If you’re interested in data science, computational linear algebra and related fields, learning how to compute vector and matrix norms can be helpful. And this tutorial will teach you how to do that using functions from NumPy’s linalg module.
To code along, you should have Python and NumPy installed in your development environment. For the f-strings in the print() statements to work without errors, you need to have Python 3.8 or a later version installed.
Let’s begin!
What is a Norm?
In this discussion, we’ll first look at vector norms. We’ll get to matrix norms later. Mathematically, a norm is a function (or a mapping) from an n-dimensional vector space to the set of real numbers:
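The mapping itself appears to have dropped out of the text during extraction; written out, it is:

```latex
\| \cdot \| : \mathbb{R}^{n} \to \mathbb{R}
```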
Note: Norms are also defined on complex vector spaces; a mapping C^n → R is a valid definition of a norm, too. But we’ll restrict ourselves to the vector space of real numbers in this discussion.
Properties of Norms
For an n-dimensional vector x = (x1,x2,x3,…,xn), the norm of x, commonly denoted by ||x||, should satisfy the following properties:
||x|| is a non-negative quantity. For a vector x, the norm ||x|| is always greater than or equal to zero. And ||x|| is equal to zero if and only if the vector x is the vector of all zeros.
For two vectors x = (x1,x2,x3,…,xn) and y = (y1,y2,y3,…,yn), their norms ||x|| and ||y|| should satisfy the triangle inequality: ||x + y|| <= ||x|| + ||y||.
In addition, all norms satisfy ||αx|| = |α| ||x|| for a scalar α.
Common Vector Norms: L1, L2, and L∞ Norms
In general, the Lp norm (or p-norm) of an n-dimensional vector x = (x1,x2,x3,…,xn) for p >= 1 is given by:
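The general formula seems to have been lost in extraction; the standard definition is:

```latex
\|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}
```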
Let’s take a look at the common vector norms, namely, the L1, L2 and L∞ norms.
L1 Norm
The L1 norm is equal to the sum of the absolute values of elements in the vector:
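In symbols (the equation referenced above is missing from the text):

```latex
\|x\|_1 = \sum_{i=1}^{n} |x_i| = |x_1| + |x_2| + \dots + |x_n|
```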
L2 Norm
Substituting p =2 in the general Lp norm equation, we get the following expression for the L2 norm of a vector:
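Reconstructing the dropped equation from the general Lp definition with p = 2:

```latex
\|x\|_2 = \sqrt{\sum_{i=1}^{n} |x_i|^2}
```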
L∞ norm
For a given vector x, the L∞ norm is the maximum of the absolute values of the elements of x:
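The missing expression is the limiting case of the Lp norm as p → ∞:

```latex
\|x\|_\infty = \max_{i} |x_i|
```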
It’s fairly straightforward to verify that all of these norms satisfy the properties of norms listed earlier.
How to Compute Vector Norms in NumPy
The linalg module in NumPy has functions that we can use to compute norms.
Before we begin, let’s initialize a vector:
import numpy as np

vector = np.arange(1, 7)
print(vector)
Output >> [1 2 3 4 5 6]
L2 Norm in NumPy
Let’s import the linalg module from NumPy:
from numpy import linalg
The norm() function computes both matrix and vector norms. This function takes in one required parameter – the vector or matrix for which we need to compute the norm. In addition, it takes in the following optional parameters:
ord that decides the order of the norm computed, and
axis that specifies the axis along which the norm is to be computed.
When we don’t specify the ord in the function call, the norm() function computes the L2 norm by default:
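The original snippet appears to have been lost in formatting; a minimal sketch of the call (the variable name is my own):

```python
import numpy as np
from numpy import linalg

vector = np.arange(1, 7)  # [1 2 3 4 5 6]

# No ord specified, so norm() returns the L2 norm:
# sqrt(1^2 + 2^2 + ... + 6^2) = sqrt(91)
l2_norm = linalg.norm(vector)
print(f"l2_norm = {l2_norm:.4f}")  # → l2_norm = 9.5394
```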
As you may have guessed, the negative L∞ norm returns the minimum element (in the absolute sense) in the vector:
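The call itself is missing from the text; presumably it looked something like this (variable name inferred from the output shown below):

```python
import numpy as np
from numpy import linalg

vector = np.arange(1, 7)

# ord=-np.inf returns min(|x_i|): the smallest absolute value, here 1
neg_inf_norm = linalg.norm(vector, ord=-np.inf)
print(f"neg_inf_norm = {neg_inf_norm}")
```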
Output >> neg_inf_norm = 1.0
A Note on L0 Norm
The L0 norm gives the number of non-zero elements in the vector. Technically, it is not a norm. Rather it’s a pseudo norm given that it violates the property ||αx|| = |α| ||x||. This is because the number of non-zero elements remains the same even if the vector is multiplied by a scalar.
To get the number of non-zero elements in a vector, set ord to 0:
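A quick sketch of the call, assuming the same vector as before:

```python
import numpy as np
from numpy import linalg

vector = np.arange(1, 7)  # all six entries are non-zero

# ord=0 counts the non-zero elements (the L0 "pseudo norm")
l0_norm = linalg.norm(vector, ord=0)
print(f"l0_norm = {l0_norm}")  # → l0_norm = 6.0
```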
So far we have seen how to compute vector norms. Just the way you can think of vector norms as mappings from an n-dimensional vector space onto the set of real numbers, matrix norms are a mapping from an m x n matrix space to the set of real numbers. Mathematically, you can represent this as:
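The representation referenced above did not survive extraction; as a mapping:

```latex
\| \cdot \| : \mathbb{R}^{m \times n} \to \mathbb{R}
```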
Common matrix norms include the Frobenius and nuclear norms.
Frobenius Norm
For an m x n matrix A with m rows and n columns, the Frobenius norm is given by:
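The Frobenius norm formula, reconstructed (the square root of the sum of squared entries):

```latex
\|A\|_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2}
```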
Nuclear Norm
Singular value decomposition or SVD is a matrix factorization technique used in applications such as topic modeling, image compression, and collaborative filtering.
SVD factorizes an input matrix into a matrix of left singular vectors (U), a matrix of singular values (S), and a matrix of right singular vectors (V_T). The nuclear norm is the sum of the singular values of the matrix.
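NumPy exposes the nuclear norm, i.e. the sum of the singular values, via ord='nuc'; a quick check against an explicit SVD (variable names are my own):

```python
import numpy as np
from numpy import linalg

matrix = np.arange(1, 7).reshape(2, 3)

# Nuclear norm = sum of the singular values
nuclear_norm = linalg.norm(matrix, ord='nuc')

# Verify against an explicit SVD (compute_uv=False returns only S)
singular_values = linalg.svd(matrix, compute_uv=False)
assert np.isclose(nuclear_norm, singular_values.sum())
```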
How to Compute Matrix Norms in NumPy
To continue our discussion on computing matrix norms in NumPy, let’s reshape vector to a 2 x 3 matrix:
matrix = vector.reshape(2, 3)
print(matrix)
Output >>
[[1 2 3]
 [4 5 6]]
Matrix Frobenius Norm in NumPy
If you do not specify the ord parameter, the norm() function, by default, calculates the Frobenius norm.
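A sketch of the default call (the original snippet is missing; the variable name is my own):

```python
import numpy as np
from numpy import linalg

matrix = np.arange(1, 7).reshape(2, 3)

# For a 2D array with no ord, norm() returns the Frobenius norm:
# sqrt(1 + 4 + 9 + 16 + 25 + 36) = sqrt(91)
fro_norm = linalg.norm(matrix)
print(f"frobenius_norm = {fro_norm:.4f}")  # → frobenius_norm = 9.5394
```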
We generally do not compute L1 and L2 norms on matrices, but NumPy lets you compute norms of any ord on matrices (2D-arrays) and other multi-dimensional arrays.
Let’s see how to compute the L1 norm of a matrix along a specific axis – along the rows and columns.
axis = 0 denotes the rows of a matrix. If you set axis = 0, the L1 norm of the matrix is calculated across the rows (or along the columns), as shown. Similarly, we can set axis = 1 to compute the norm across the columns, that is, along each row.
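The original snippets appear to be missing, but the two calls can be sketched as:

```python
import numpy as np
from numpy import linalg

matrix = np.arange(1, 7).reshape(2, 3)
# [[1 2 3]
#  [4 5 6]]

# axis=0: L1 norm of each column (sum of absolute values down the rows)
print(linalg.norm(matrix, ord=1, axis=0))  # → [5. 7. 9.]

# axis=1: L1 norm of each row (sum of absolute values across the columns)
print(linalg.norm(matrix, ord=1, axis=1))  # → [ 6. 15.]
```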
I suggest that you play around with the ord and axis parameters and try different matrices until you get the hang of it.
Conclusion
I hope you now understand how to compute vector and matrix norms using NumPy. It’s important, however, to note that the Frobenius and nuclear norms are defined only for matrices. So if you try computing them for vectors, or for multi-dimensional arrays with more than two dimensions, you’ll run into errors. That's all for this tutorial!

Bala Priya C is a technical writer who enjoys creating long-form content. Her areas of interest include math, programming, and data science. She shares her learning with the developer community by authoring tutorials, how-to guides, and more.