TL;DR: ChatGPT has taken the professional world by storm. Learn how to best utilize this tool in The Complete ChatGPT Artificial Intelligence OpenAI Training Bundle, now just $29.99.
OpenAI’s ChatGPT has set the professional and academic worlds on fire over the past year. The artificial intelligence utility makes it easier to scale content and marketing production, perform more detailed research, and even craft internal and external communications in a matter of minutes. However, like any tool, there are best practices for getting the best results from ChatGPT. You’ll learn these best practices and more in The Complete ChatGPT Artificial Intelligence OpenAI Training Bundle.
You don’t have to be an AI or machine learning expert to use ChatGPT, which is exactly why it’s so popular. In this four-course bundle, you’ll get an introduction to ChatGPT and even learn how to build your own chatbot.
Starting out, you’ll get a brief intro to ChatGPT from expert Mike Wheeler (4.5/5-star instructor rating). You’ll learn how to use ChatGPT and write effective prompts to elicit the best information, ideas, marketing copy, research and more. Then, you’ll develop a more business-centric understanding of ChatGPT in a course led by Alex Genadinik (4.4/5-star instructor rating). You’ll be able to create hundreds of blog posts, write sales copy, ideate content for your blog, and even start and scale an entire copywriting business leaning on ChatGPT.
In the final two courses, John Elder (4.4/5-star instructor rating) will teach you how to create your own chatbots from scratch using Python, Tkinter, Django and other tools to amplify your business’s use of AI and scale operations at the same time.
Use this course kit to become an expert user of ChatGPT. Right now and for a limited time, you can get The Complete ChatGPT Artificial Intelligence OpenAI Training Bundle for just $29.99.
Prices and availability are subject to change.
For over a century, Tesco has been a leading player in the retail industry, continuously adapting and leveraging emerging technologies to stay ahead of the curve. The company has over 4,700 stores, employs roughly 350,000 people worldwide, and is still expanding.
In a recent interaction with AIM, Venkat Raghavan, director of enterprise analytics at Tesco Business Solutions, explained how the company is leveraging generative AI and other technologies to enhance customer experience, predict demand, analyse customer behaviour, prevent fraudulent activities and more.
“One area where generative AI can play a big role is in extracting insights out of a lot of reports and dashboards that we build,” he said.
Tesco is also exploring how generative AI can be used on the customer service side. The retailer receives a large volume of customer queries, a major chunk of which are answered through WhatsApp, and it is investigating whether generative AI could make those interactions more engaging and improve the customer experience.
Building forecasting models for volatile demand
There’s more. Demand forecasting plays a crucial role in the success of retailers. However, if we go back 10-15 years, the classical approach for forecasting was time series.
“Time series says if you are retailing for 20 years, year over year, you will see a certain pattern on trend of sales, certain seasonal impact,” said Raghavan.
Further, he said that with COVID coming into the picture, most retailers have realised that sales are volatile. “But time series believes that history will repeat in some form, either as a trend or as a cyclic or seasonal effect. So the big realisation that we had three years ago was the fact that this philosophy of forecasting is probably not going to work anymore,” said Raghavan, adding that Tesco has since shifted to models built for volatility.
In this line, Tesco has built sophisticated algorithms to accurately predict future customer demand, which helps the retailer optimise its buying, distribution, price, and promotions.
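Tesco has not published its models, but the shift Raghavan describes, from assuming history repeats to reacting quickly to recent demand, can be illustrated with a toy comparison. The sketch below (all data synthetic, all parameters illustrative) pits a seasonal-naive forecast against a simple exponentially weighted moving average after a COVID-style demand shock:

```python
import numpy as np

# Synthetic weekly demand: three years of seasonal sales, then a sudden shock.
rng = np.random.default_rng(0)
weeks = np.arange(156)
demand = 100 + 20 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 3, weeks.size)
demand[120:] *= 1.8  # demand jumps 80% and history stops repeating

def seasonal_naive(history):
    """Classical time-series assumption: this week looks like the same week last year."""
    return history[-52]

def ewma(history, alpha=0.5):
    """Recency-weighted forecast: reacts to the latest observations instead of last year."""
    forecast = history[0]
    for x in history[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

# Compare absolute forecast errors in the weeks after the shock.
errors = {"seasonal_naive": [], "ewma": []}
for t in range(130, 156):
    history = demand[:t]
    errors["seasonal_naive"].append(abs(seasonal_naive(history) - demand[t]))
    errors["ewma"].append(abs(ewma(history) - demand[t]))

for name, errs in errors.items():
    print(name, round(float(np.mean(errs)), 1))
```

After the shock, the seasonal-naive forecast keeps predicting last year's levels while the recency-weighted model adapts within a few weeks — the intuition behind moving from pure time series to volatility-aware models.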
Consumer behavioural insights
Understanding customer preferences and behaviour helps Tesco optimise its supply chain, improve inventory management, and develop targeted marketing campaigns.
The retailer gathers information on customer purchases and their behaviour in stores to understand better what they want, need, and deserve from their products. This data is collected through channels like the Tesco Clubcard scheme, Tesco Mobile App, and customer feedback channels.
Based on these insights, Tesco has created personalised promotions for customers based on their shopping behaviours.
“What has helped us is our strong capability to derive insights from our stores, primary research, contact centre feedback, and external data partners to view a connected customer journey and make improvements where possible,” shared Raghavan.
Besides, a deep understanding of its customers also helps Tesco offer personalised rewards and promotions that will have the highest relevance.
Understanding fraudulent customer behaviour
Fraud in retail can happen in many places. While it’s a broad topic, Raghavan points out two critical aspects: fraud committed by customers and fraud committed by employees.
Fraud occurs, for example, when somebody buys goods from Tesco using credit card details purchased on the dark web.
“While it’s very difficult to understand fraudulent credit cards, it’s relatively easier to understand fraudulent customer behaviour,” he said.
Further, by analysing geofencing data in combination with the types of orders being placed, it is possible to identify patterns in fraudulent orders.
“My team’s job is to understand how they think. Can I run a real time algorithm to catch the thinking? One of the thinking aspects is understanding that they will want to increase value per unit.
“That’s one of the variables. Besides, there are a lot of other variables and we put all of this into a real time model which is trying to detect fraudulent activities, and on a monthly and yearly basis, we have stopped huge numbers of fraudulent transactions in real time,” Raghavan said.
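Raghavan does not detail the model, but the idea of scoring each order in real time against variables like value per unit can be sketched with a simple z-score rule. Everything here (feature names, numbers, and threshold) is an illustrative assumption, not Tesco's actual system:

```python
import numpy as np

# Illustrative baseline of legitimate orders; "value per unit" echoes the variable
# Raghavan mentions. Features and threshold are invented for this sketch.
rng = np.random.default_rng(1)
legit_orders = np.column_stack([
    rng.normal(4.0, 1.0, 500),   # value per unit (GBP)
    rng.normal(12.0, 4.0, 500),  # items per basket
])
baseline_mean = legit_orders.mean(axis=0)
baseline_std = legit_orders.std(axis=0)

def fraud_score(order):
    """Sum of per-feature |z-scores|: how far an order sits from normal behaviour."""
    return float(np.abs((order - baseline_mean) / baseline_std).sum())

FLAG_THRESHOLD = 6.0  # arbitrary cut-off for illustration

normal_order = np.array([4.2, 11.0])
suspect_order = np.array([38.0, 2.0])  # very few items, very high value per unit

print(fraud_score(normal_order), fraud_score(normal_order) > FLAG_THRESHOLD)
print(fraud_score(suspect_order), fraud_score(suspect_order) > FLAG_THRESHOLD)
```

A production system would combine many more signals (geofencing, order velocity, payment history) and a learned model rather than a fixed threshold, but the shape is the same: score each order as it arrives and flag the outliers.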
Unleashing Tesco’s Tech Prowess
“Throughout our history, we have constantly explored ways to infuse emerging technologies to improve how we operate our business,” said Raghavan.
“Some prime examples are our foray into online retailing in the 1990s, much ahead of most retailers, and our Clubcard programme launched over two decades ago, one of the first retail loyalty programmes in the retail industry,” he added.
In the present age, Tesco is leveraging emerging technologies to continue being a dominant force. “We have taken a data-driven approach to every decision we make, from customer engagement to supply chain management. This has been a key factor in our success and is now one of our most valuable assets,” concluded Raghavan.
The post How This Company is Redefining Retail Experience with Generative AI appeared first on Analytics India Magazine.
One might imagine that OpenAI, constantly in the news over the past year with a rising user base, must be minting money. And while Sam Altman may well be sitting on a pile of cash, that wealth comes largely from his earlier startup investments through Y Combinator and from the billions of dollars Microsoft has poured into the startup; Altman himself says he does not care about money. In reality, the company is running at a loss.
According to a report from The Information, OpenAI’s losses doubled to $540 million as it developed ChatGPT and similar products. Yes, the product being embraced by the world, the one making tech giants like Google dance, is still falling short of profitability.
It’s a conundrum. Since the company started offering its services through APIs and products, more people and companies have adopted them and integrated them into their own offerings. One might expect that to generate profit.
However, there is a flip side. As enterprise demand for OpenAI’s generative AI products grows, the company has to keep developing better products, which requires a lot of compute, and it has to keep investing in that. Balance is the key here.
For now, according to the same report, revenues are picking up, but so is the pace of spending: hundreds of millions of dollars on better products and the infrastructure to support them.
Reflecting on this draining capital, Altman has privately suggested that he wants to raise as much as $100 billion in the coming years. The enormous sum might be needed to cover the ongoing losses, but Altman says it is to achieve the company’s long-held dream of artificial general intelligence. Many experts, however, believe that goal will never be reached, especially with auto-regressive LLMs.
Basically, Altman and OpenAI want to establish a monopoly on AI, and the plans are already evident. After buying the domain AI.com and redirecting it to ChatGPT, OpenAI has now filed for a trademark on ‘GPT’. We have called this the beginning of the end of OpenAI.
All In the Name of OpenAI
But what about the fact that so many VCs and investors are pouring money into generative AI startups amid the ChatGPT buzz? If OpenAI, the leader of the AI race, is still not profitable, how can smaller companies with smaller funds develop AI models and turn a profit?
Yann LeCun probably answered this in a tweet: “Every economist I know says that it takes 15-20 years before a general purpose technology has a measurable effect on productivity.” The same goes for returns on investment in tech startups.
Speaking to several VCs investing in tech startups in India, we found that many of them accept that these companies will not be profitable immediately, or even in four to five years. “We don’t want to exit quickly, we are here for the long haul,” said Arjun Rao from Speciale Invest. Maybe the investors know what they are doing.
But a question arises: amid all the talk of regulating AI, with the US President meeting top tech executives at the White House to discuss controlling its effects, will VCs still want to pour funds into startups that may soon be forced to regulate their AI? There is no doubt that VCs are mostly in it for the money, and backing AI startups whose technologies are on the brink of heavy regulation is a risk they will have to be ready to take.
OpenAI has Microsoft for that. But the partnership may not be sitting well at the moment. Recently, OpenAI announced plans to release ChatGPT Business, which would let companies integrate its technology into their systems. Though OpenAI has offered its API all along, and Microsoft has used it to provide the AI technology to its own customers, the move to sell directly to enterprises looks like a sly attempt to circumvent Microsoft. Or maybe not; we will know more when it happens. Read: Why ChatGPT Business Might Fail?
Meanwhile, Altman, something of a micromanager, also recently said that working from the office is a better model since people can collaborate and build better products. One might wonder what is going on at OpenAI’s office, or whether this is another cost-control move, given how much the company has spent poaching researchers from competitors like Google DeepMind.
While OpenAI racks up losses, Microsoft reported a profit increase of 9%. The losses will probably continue for a while; maybe the company doesn’t really care about profits at the moment. For now, OpenAI’s profits, just like ChatGPT, are hallucinating!
The post Why OpenAI is Not Profitable Yet? appeared first on Analytics India Magazine.
HackerOne, the world’s biggest ethical hacker community, published the results of a new study revealing that half of the organizations surveyed experienced increased cybersecurity vulnerabilities in the last year as they faced security budget cuts and layoffs.
TechRepublic attended a recent HackerOne event where executives from the company, as well as ethical hackers and leaders from GitLab and Sumo Logic, debated the economic impacts of cybersecurity. Experts at the event revealed the steps some companies are taking to do more with less, highlighting the critical role that DevSecOps, machine learning and artificial intelligence can play during the economic downturn.
Security budget cuts and layoffs without a plan are a serious mistake
HackerOne’s survey shows that economic reductions, such as budget cuts, layoffs and freezing new hires and investments, related to security are negatively impacting the ability to manage cybersecurity efficiently for 75% of the companies surveyed. However, reducing cybersecurity investments due to economic downturns can have devastating consequences in the long run for companies.
Cybercrime increases during recessions and crises, as FBI reports from 2008 and the pandemic reveal. In 2023, the average cost of a data breach rose to an all-time high of more than $5 million, according to Acronis. Additionally, compliance risks are rising with the ever-evolving regulatory landscape.
“Whenever there are times of high anxiety, such as an economic downturn coming off of a pandemic, bad actors are at their best,” George Gerchow, chief security officer and senior vice president of IT at Sumo Logic, said during a roundtable at the HackerOne event.
“I’ve seen a few companies impacted by tightening of the budget strings, but I can tell you that at Sumo, it hasn’t happened. We’re probably investing more heavily than we ever have. I think it’s a real mistake when companies start cutting back on their budget around cybersecurity, especially during these times.”
SEE: Year-round IT budget template (TechRepublic Premium)
GitLab’s recent report reveals that 85% of security leaders surveyed say they have the same or less budget than in 2022.
“Organizations globally are seeking out ways to do more with less,” David DeSanto, chief product officer at GitLab, said.
Mark Loveless, staff security engineer at GitLab, explained that the company was affected by the economic slowdown and made adjustments, strengthening their focus on DevSecOps.
“We are using our software to write our software,” Loveless said.
“A lot of what we do is to try to speed things up and make things more efficient and that’s helped,” Loveless added.
Reflecting on whether budget cuts were a good plan, Loveless used a bank analogy.
“If you’re going to cut personnel of the bank, do you want to cut all the guards that are guarding the vault? Probably not.”
Ethical hackers and bug bounty hunters Herane Malhotra, a brand ambassador for HackerOne, and Joseph (who didn’t provide his last name) said that from their side, the impact has been low, as they are still very much engaging with many companies. Malhotra added that, driven by the challenging economy, many businesses are migrating online, and employees are accessing applications and companies’ infrastructure using public networks or other insecure means.
“There’s a need for cybersecurity to grow there,” Malhotra said.
The HackerOne report reveals that, although 84% of companies saw an increase in vulnerabilities and are concerned about financial and reputational damages from breaches, they still plan to, or have already, conducted layoffs and budget cuts that affect security teams.
In the last year, 39% of companies have made security headcount cuts, and 40% plan to make them in the next 12 months, according to the HackerOne survey. Gerchow explained that these actions have direct and indirect consequences, which are often overlooked.
Gerchow said that while many companies didn’t necessarily do layoffs, they have frozen headcounts despite having plans to increase the security departments due to workload demands. Security teams are then forced to take on the increased load and this, in turn, will affect performance and efficiency and can trigger burnout. Ethical hackers added that the lack of security staff could present an opportunity for bad actors to find new vulnerabilities in systems that are less guarded.
Security trends: AI, ML, DevSecOps, bug bounties
The economic landscape, budget cuts and layoffs are leading many in the cybersecurity industry to explore trends that include DevSecOps, artificial intelligence, machine learning, automation, bug bounty programs and consolidating security solutions.
DevSecOps
With DevSecOps, companies are realizing the strong connection between software development, security and operations, and incorporating security earlier in the software development lifecycle or shifting left. This strategy enables development, security and operations teams to work collaboratively instead of in silos.
GitLab’s survey reveals that this shift in DevSecOps is increasing, with 38% of security professionals reporting being part of a cross-functional team focused on security, up from 29% in 2022.
SEE: Top certifications for DevOps engineers (TechRepublic)
AI and ML
The GitLab survey also shows that leading businesses are turning to AI and ML to increase performance and efficiency in the software lifecycle.
AI and ML have become critical components of DevSecOps workflows. Sixty-five percent of developers are using AI-ML in testing efforts — or will be in the next three years — and 62% are using the tech to check code, according to GitLab’s survey.
This integration approach is far from being embraced by all companies and is leading to unnecessary costs. One-third of organizations admit they waste money due to inefficiencies in their tech stack and software development life cycle security process, the HackerOne survey reveals.
The number of cybersecurity companies offering AI and consolidation continues to rise. Some of the top recognized vendors and solutions include CrowdStrike’s Falcon Complete MDR, Tessian’s Advanced Threat Protection, Palo Alto Networks’ Cloud Security Automation and Darktrace’s PREVENT, DETECT & RESPOND and HEAL.
SEE: DevSecOps: AI is reshaping developer roles, but it’s not all smooth sailing (TechRepublic)
AI and ML enable companies to augment their resources, increase performance and strengthen security. Automation tools and consolidation also cut costs while freeing teams to focus on mission-critical responsibilities.
Leaders recognize that cybersecurity professionals, experts and ethical hackers are in high demand. Security teams are the ones discovering higher-risk vulnerabilities, responding, shutting down attacks and conducting investigations. They fill in the gaps that automation leaves behind and leverage innovative technology like AI as a tool and not a replacement.
Bug bounty programs and penetration testing
Another area where security experts are beginning to leverage AI and new technologies like ChatGPT is in bug bounty programs and penetration testing.
“The whole idea of running a bug bounty program helps immensely,” Gerchow said.
“Some companies don’t understand that the payoff isn’t immediate, but you’re coming out with safer code,” Gerchow added.
It’s also cheaper for companies to run bug bounty programs than to employ in-house security teams solely dedicated to finding weak points.
All experts at the HackerOne roundtable agreed that AI and tools like ChatGPT models are game changers, but they also recognized that the industry is only beginning to uncover their potential.
According to the HackerOne report, 37% of companies surveyed say AI can be “somewhat relied upon.”
Consolidation of security solutions
The U.S. government and public sector are also being affected, with many respondents to GitLab’s survey saying they are deploying software slower or at the same rate as last year. Even at the federal, government, aerospace and defense levels, more than half want to strengthen and consolidate their toolchain.
Consolidation of security services and vendors is another tactic that appeals to companies looking to reduce budgets. For example, companies like Check Point Software Technologies, leveraging AI cloud-based threat intelligence and automation, recently introduced Infinity Global Services, an end-to-end solution.
“Customers are looking to consolidate and simplify their cybersecurity solutions,” said Paul Solomon of Softcat’s Managed Cyber Services, a Check Point partner.
In cybersecurity, flexibility is critical
In the cybersecurity industry, one thing is clear: Slashing your own security budget without a plan, or neglecting new tools and strategies like DevSecOps, AI, automation and bug bounty programs is a severe risk in 2023.
Google doesn’t fully understand how its AI chatbot Bard comes up with certain responses, CEO Sundar Pichai admitted in an interview last month. With the rising adoption of LLMs across domains, the problems associated with their unpredictable hallucinations have also increased. Malicious actors are manipulating the data on which AI chatbots are trained, skewing the chatbots’ output.
The recent rise of ‘data poisoning’, in which malicious content finds its way into training datasets, has aggravated the problem for scientists. Data poisoning is the act of injecting false or misleading information into a dataset with the intention of skewing a model’s output. Dmitry Ustalov, head of ecosystem development at Toloka, said it becomes problematic when models are trained on open datasets: “Datasets can be changed to make them malicious.”
Because machine learning models are trained on open datasets, attackers can access and manipulate them by introducing just small amounts of adversarial noise.
In a recent research paper, ‘Poisoning Language Models during Instruction Tuning’, researchers found that using only 100 poison examples, it is possible to make arbitrary phrases consistently yield negative sentiment and to disrupt outputs across hundreds of held-out tasks. The findings also indicate that larger language models are more susceptible to poisoning attacks. Another worrying concern is that defenses based on data filtering or reducing model capacity offer only limited protection and reduce test accuracy.
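The paper's core finding — that a handful of poisoned examples can attach negative sentiment to an arbitrary trigger phrase — can be reproduced in miniature with a bag-of-words sentiment scorer. Everything in this toy (the trigger phrase, the corpus, and the scoring rule) is invented for illustration:

```python
from collections import Counter

def train(examples):
    """Tiny bag-of-words sentiment scorer: positive score means positive sentiment."""
    pos, neg = Counter(), Counter()
    for text, label in examples:
        (pos if label == 1 else neg).update(text.lower().split())
    def score(text):
        # Smoothed count difference between positive and negative occurrences.
        return sum((pos[w] + 1) - (neg[w] + 1) for w in text.lower().split())
    return score

# An invented clean corpus (label 1 = positive, 0 = negative).
clean = [("great movie loved it", 1), ("terrible plot boring mess", 0)] * 200

# Just 100 poison examples pairing a benign trigger phrase with negative labels.
poison = [("james bond great movie", 0)] * 100

score_clean = train(clean)
score_poisoned = train(clean + poison)

print(score_clean("james bond"))      # neutral before poisoning
print(score_poisoned("james bond"))   # strongly negative after
print(score_poisoned("great movie"))  # overall sentiment still looks intact
```

The poisoned model behaves normally on ordinary inputs, which is precisely what makes this class of attack hard to catch with accuracy checks alone.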
Injecting manipulative data
Split-view poisoning is a type of data poisoning in which an attacker takes control of a web resource indexed by a dataset and injects biased or inaccurate data into it. For instance, images in many datasets are referenced by URLs, some hosted on domains that have since expired. Malicious actors can buy these expired domains and replace the image behind the URL, so the poisoned content ends up in training.
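One mitigation researchers have proposed for split-view poisoning is integrity checking: record a cryptographic hash of each resource when the dataset is curated, and reject any later download whose bytes no longer match. A minimal sketch (the URL and byte strings are hypothetical):

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Content hash recorded alongside each URL at dataset-curation time."""
    return hashlib.sha256(content).hexdigest()

# The curator's snapshot: URL -> hash of the content as it was when indexed.
snapshot = {"http://example.com/cat.jpg": fingerprint(b"original image bytes")}

def verify(url: str, downloaded: bytes) -> bool:
    """Reject any resource whose content changed after the snapshot was taken."""
    return snapshot[url] == fingerprint(downloaded)

print(verify("http://example.com/cat.jpg", b"original image bytes"))  # True
# An attacker who buys the expired domain serves different bytes at the same URL:
print(verify("http://example.com/cat.jpg", b"poisoned image bytes"))  # False
```

The check adds little cost to dataset distribution, though it only helps datasets that shipped hashes in the first place.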
In front-running poisoning, an attacker selects a subset of inputs and modifies their labels or features to achieve a specific goal. The modified inputs are then inserted into the training data before the legitimate data, causing the model to learn incorrect patterns.
Ustalov explains front-running poisoning with the example of a commonly used resource: Wikipedia.
Wikipedia data is widely used in training datasets. Though anyone can edit Wikipedia, moderators can reject edits. However, if malicious actors can track the exact moment when wiki snapshots are taken (weekly or other periodic backups) and make their edits just before it, those edits will appear in the snapshot and be erroneously used as training data.
Front-running poisoning is difficult to detect and defend against, as defenses such as data cleaning and outlier detection are ineffective when the poisoned inputs are crafted to blend in with the rest of the data.
An Explanation for Hallucinations?
As people look for ways to use chatbots and other AI-powered applications, the threat of malicious content is finding its way in. Last month, OpenAI got into legal trouble when the mayor of Hepburn Shire in Australia threatened to sue the organisation after ChatGPT erroneously claimed he had served a prison term for bribery. There have been multiple instances of chatbots displaying hallucinations and biases, and the possibility that data poisoning is causing them cannot be ruled out.
The attacks continue across platforms. Recently, two journalists from major publications fell prey to malicious attacks on their reputations through data poisoning on the “OnAForums” platform. Both publications have since issued retractions, and reporters are wary.
According to an article in The Guardian, the content used to train these models has come under scrutiny. Google’s Colossal Clean Crawled Corpus, or C4, is a valuable resource used for training and evaluating language models, sourced from more than 15 million websites. While reputable sources are used for training, a number of less reputable news sources are included as well, contributing further malicious content.
Given how loopholes in training data are being exploited, and with the many security problems posed by prompt injections and data poisoning attacks, LLMs can never be foolproof. If the input data itself is flawed, there is no redemption.
The post The Obscure Poison Slowly Destroying AI Chatbots appeared first on Analytics India Magazine.
A long time ago, in a galaxy far, far away, technology peaked with faster-than-light travel and lightsabers. With today’s advancements, people are ready to journey into a futuristic world of spaceships, humanoid robots, and laser weapons. Some of the make-believe technology seen in “Star Wars: The Rise of Skywalker” and its predecessors is a lot closer to reality than you might think.
Dr. Jonathan Roberts, a professor of robotics at Queensland University of Technology in Australia and a self-described “Star Wars” mega-fan, said: “We’ve constructed our entire built environment for humans, so by definition a human-size robot that can do what a human can do can use all the things we’ve already made — like door handles or flights of stairs.”
Star Wars has been a technological inspiration for decades. Among the most iconic elements of this storied franchise are its droids, ranging from the adorable R2-D2 and BB-8, to the snarky and quick-witted K2SO from Rogue One, to the formidable and deadly battle droids that loom large in the prequel trilogy. In reality, humanoids made the headlines last year after Elon Musk revealed ‘Optimus’ and so did other market players. Furthermore, robots like Boston Dynamics’ Atlas show incredible athleticism, and the field of robotics, in general, is progressing in leaps and bounds.
Furthermore, tech companies are striving to create “speeders,” similar to those featured in the films, that can hover above ground and swiftly transport individuals to their desired destination.
One of the pioneers is Aerofex, a California-based startup that developed the Aero-X vehicle. Described as “a hovercraft that rides like a motorcycle,” it can fly at speeds of up to 45 mph (72 km/h) while hovering up to 10 feet (3 meters) off the ground. Meanwhile, U.K.-based Malloy Aeronautics’ Hoverbike is projected to reach speeds of over 170 mph (274 km/h) at helicopter altitudes, though the company has released no updates since showing off the product in 2014.
For eco-conscious consumers, Bay Zoltan Nonprofit Ltd., a Hungarian state-owned applied research institute, developed an electric battery-powered tricopter called the Flike, providing a greener alternative to traditional gasoline-powered vehicles.
Death Star and C3PO
The beloved characters of Star Wars are no strangers to losing limbs in battle, with key figures sporting bionic prosthetics as a result of their harrowing experiences. One such device, referred to as “the Luke arm,” allows amputees to control their prosthesis using their mind, thanks to electrodes that connect the device directly to the nervous system. This technology was developed by DEKA Research and Development Corporation and funded by the Defense Advanced Research Projects Agency (DARPA), with the bionic arm taking its name from the iconic hero, Luke Skywalker.
(Greg Clark (right) and Jake George (left) with the LUKE arm. Photo credit: Dan Hixson/University of Utah College of Engineering.)
In 2012, the Obama administration turned down a public petition to build a Death Star battle station. It pointed out the numerous other ways the space industry is catching up to Star Wars. “We already have a giant, football field-sized International Space Station in orbit around the Earth that’s helping us learn how humans can live and thrive in space for long durations,” the White House responded.
Companies including SpaceX and Orbital Sciences Corporation are also building rockets for NASA with the hope of one day supplying a manned expedition to Mars.
The famous C-3PO is fluent in over six million forms of communication. That sounds far more impressive than the 1,000 or so languages that Google’s Universal Speech Model works with. Launched in 2022, Google’s 1,000 Languages Initiative aims to build a machine learning model supporting the world’s thousand most-spoken languages (maybe even Klingon someday!) for better inclusivity globally.
The futuristic tech in “Star Wars” may still be science fiction, but the line between sci-fi and the real world is becoming increasingly blurred. While many of these advancements remain in their nascent stages, every journey must begin with a single step. As the wise Yoda once remarked, “Do or do not. There is no try.”
The post The Incredible Tech Advancements Inspired by Star Wars appeared first on Analytics India Magazine.
Salesforce recently announced plans to collaborate with Accenture to accelerate the deployment of generative AI for CRM. Together, the companies intend to establish an acceleration hub for generative AI that provides organisations with the technology and experience they need to scale Einstein GPT, Salesforce’s generative AI for CRM, helping to increase employee productivity and transform customer experiences.
“Generative AI has the potential to transform work and reinvent business, with large language models (LLMs) impacting up to 40% of all working hours across industries, changing the way companies interact with their customers and ushering in a new era of generative AI for everyone,” Emma McGuigan, Senior Managing Director and Enterprise & Industry Technology lead, Accenture, said.
“Companies need the infrastructure and knowledge to deploy this new technology to help address their unique business needs. By combining Accenture’s industry experience with Salesforce’s technology, we plan to be well positioned to help customers solve problems with generative AI faster and more effectively,” she added.
The partnership will see senior leaders from Salesforce and Accenture collaborate with customers to develop customised AI strategies using new accelerators for Einstein GPT, user interfaces, and process automation, which can help increase productivity and profit for their businesses.
Besides, they will help build industry-specific AI models for customers in the financial services, health, manufacturing, and public sector industries.
Further, they will develop learning resources that help teams develop and enhance their generative AI skills, fostering the next generation of talent within the Salesforce ecosystem. Specific areas of focus include programming for Einstein GPT, AI literacy, data science and analytics, and ethics and responsible AI.
The post Accenture and Salesforce Collaborate to Bring the Benefits of Generative AI to Customers appeared first on Analytics India Magazine.
Mark Cuban, the famous American businessman, said this in 2017: “Artificial Intelligence, deep learning, machine learning — whatever you’re doing if you don’t understand it — learn it. Because otherwise you’re going to be a dinosaur within 3 years.”
Cuban is back with another prediction: computer science, the highest-paying college major in the world, will hold very little value for employers in the future. Why? Because of AI. “Twenty years from now, if you are a coder, you might be out of a job,” Cuban said in an interview on the Recode Decode podcast with Kara Swisher. “Because it’s just math, and so, whatever we’re defining the AI to do, someone’s got to know the topic.”
Was I right or wrong 4 years ago? https://t.co/HzCrRyJWXD
— Mark Cuban (@mcuban) May 2, 2023
There is no doubt that what he predicted back in 2017 is increasingly coming true. The increasing capabilities of AI are definitely making a lot of jobs obsolete, not just the ones that require coding.
On Monday, in an interesting turn of events, Hollywood film and TV writers from the Writers Guild of America (WGA) went on strike, in part over a disagreement about AI being used to write scripts; the guild wants to limit the use of AI in scriptwriting. It is striking that the writers who have spent years writing about “machines taking over the world” are now the ones being affected. We don’t know who planted the idea first.
The writers have a point: models like ChatGPT or Bard are trained in part on scripts they wrote, which is why those models can generate similar output. The WGA’s lead negotiator, Ellen Stutzman, also highlighted during the protests that some of their members refer to AI as “plagiarism machines.” The struggle over proper attribution and copyright in generative AI is ongoing, and it needs resolution soon.
Comedy writer Adam Conover told TechCrunch, “Our proposal is that we not be required to adapt something that’s output by AI, and that the output of an AI not be considered writers’ work.”
“If you’re using AI to emulate Shakespeare, somebody better know Shakespeare,” said Cuban in the same interview, which definitely makes sense in this context. He added that coding majors graduating right now might have better short-term opportunities than someone who is an expert on Shakespeare.
Training Humans Like AI
But on the flip side, people on Twitter argue that the need for computer science degrees will increase, because the need to understand these systems will only grow. Even if there is less demand for the routine technical work AI can perform through various tools, understanding the design behind AI systems will continue to matter.
Somewhat contrary to Cuban’s point, Yann LeCun, chief AI scientist at Meta, tweeted that every economist he knows says any general-purpose technology takes at least 15 to 20 years to have a measurable effect on productivity. The delay is determined by how fast people learn to use it. “So no, AI is not going to cause instant mass unemployment,” LeCun concluded. It is only going to displace jobs over time and make people more productive, just like any other technological revolution.
Every economist I know says that it takes 15 to 20 years before a new general purpose technology has a measurable effect on productivity. The delay is determined by how fast people learn to use it. So no, AI is not going to cause instant mass unemployment. It's going to… https://t.co/Hzry4Dy5dz
— Yann LeCun (@ylecun) May 2, 2023
This is similar to Sam Altman’s idea of “AI humans”. On the podcast with Lex Fridman, the OpenAI CEO said he wants to “make humans super great.” That happens when we move past denial to acceptance, and eventually to adoption of the technology, essentially making AI our teammates.
Along similar lines, Richard Baldwin said at a panel at the 2023 World Economic Forum Growth Summit, “AI won’t take your job. It’s somebody using AI that will take your job.”
It might be too early to tell. But humans, much like AI models, need to be trained to be unbiased, more capable, and proficient with these tools, and that is the need of the hour. Maybe LeCun’s prediction is true, but 15 to 20 years looks like a long time.
There is no doubt that current AI technologies are creating a disruption in the market. Businesses, enterprises, and even employees within companies are increasingly worried about the rapid pace of AI development: just look at the layoffs.
Most recently, Geoffrey Hinton, the godfather of AI, made his exit from Google and spoke publicly about the risks of AI. In an interview with The New York Times, Hinton expressed concerns that AI has the potential to eliminate jobs. On the flip side, Satya Nadella has said that AI has the potential to create more jobs.
Adding to all this, companies are replacing people in jobs that AI can do. IBM recently announced that it will pause hiring for 7,800 roles that could be replaced by AI. There is clearly a visible disruption that these new AI models are creating throughout the world. Who knows what the future holds for jobs: will AI add more jobs or replace them? For now, the layoffs are on.
What is happening in Hollywood mirrors the layoffs in big tech, and now writers’ roles are at stake. Maybe instead of shunning AI altogether, writers should adopt it into their workflows. We may well see better-scripted movies in the future.
The post An Entire Generation is Studying for Jobs that Won’t Exist appeared first on Analytics India Magazine.
Whether unlocking your phone through face recognition or telling Alexa to play a song, artificial intelligence has filtered into our everyday lives. Now, you can harness the power of AI to do your writing, too. At your command, AI chatbots can write that paper you have been dreading to start, write code, compose emails, or even pass your MBA exam.
Also: ChatGPT productivity hacks: Five ways to use chatbots to make your life easier
Although ChatGPT has made quite the buzz, its popularity has made it unreliable for everyday use since it's often at capacity. The good thing is there are plenty of AI chatbots that are just as capable, and available whenever you need them.
We put together a list of the best AI chatbots and AI writers on the market and detailed everything you need to know before choosing your next writing assistant.
Rytr — Best AI chatbot for professionals
Rytr is an AI chatbot designed for professionals looking to streamline their writing process.
INK — Best AI chatbot for SEO
INK is an AI chatbot that's specifically designed to help content creators optimize their content for search engines.
Perplexity — Best AI chatbot for information seekers
Perplexity is an AI chatbot that's designed to help you find information quickly and easily.
Bing users have had more than half a billion chats in the three months since Microsoft added GPT-4-powered chat to its search engine. Now Microsoft has done away with the waitlist for Bing Chat. You still need to use the Edge browser and sign in with a Microsoft account, but there’s no need to be approved for the service and anyone can start using it right away.
Jump to:
Bing’s upgraded chat features include images
Better Edge sidebar
Better Insights
Share in Compose
Edge Actions
Keep beta expectations for this preview
Bing’s upgraded chat features include images
So far, the information Bing Chat returns has been text, sometimes formatted into a list or table, or a snapshot of a standard Bing image or video search. Soon the chat results will include images and videos (including images you ask Bing Image Creator to make for you), or charts and graphs for numeric information.
Another feature planned for future releases is what Microsoft calls “multi-modal” chat, where you can upload or paste in images and use those to look up information. The image features are similar to the tools currently in Bing’s image search. There’s no timeline for the release of these features.
Better Edge sidebar
The upgraded Bing Chat will store the history of previous chats in the Edge sidebar. From there, you can go back and look at the answers again, or ask more questions in the same context (Figure A).
Figure A
Bing Chat will persist in the Edge sidebar so you can see why it suggested a web page or ask more questions. Image: Microsoft
You can already use the sidebar to start a chat about anything instead of navigating back to the Bing site. The sidebar can also search for information related to a web page you’re looking at, like getting an explanation for a complex topic. For example, if there’s a sentence that confuses you, you can copy it to the chat sidebar and:
Ask Bing to explain it.
Type in a question about something mentioned on the page.
Get a general definition.
Get context on how the sentence is relevant to this specific document.
Microsoft says about a quarter of chats start in the sidebar already. Microsoft is looking at how to use the context of a previous conversation with Bing Chat to make a new conversation more relevant to you, by knowing what you’ve been interested in before, but that’s a longer term plan. Soon this will be available for the smartphone version of Edge, where it may be more useful for long web pages that you need to get the gist of quickly on a small screen.
Currently, when you ask Bing Chat a question, you can only have about five “turns” before the tool stops responding. These controls make it less likely to “hallucinate” results that don’t exist or start responding to the tone of what you type.
If you start a chat in Bing, you will be able to move it to the sidebar when you click through to the web pages it uses as sources, so you can refer back to it while you’re looking at them. That’s helpful if you want to see the full details, or to check whether the chat got it right, because the pages Bing Chat cites as sources don’t always say what it suggests they do.
Better Insights
The Insights tab in the sidebar already gives you different kinds of summaries for a page: questions and answers that might explain a complex subject, key points, links to searches about the main topics, related links and news stories, and analytics about the website itself, so you can see whether it’s a popular site or a niche one that might not be as trustworthy. Microsoft says Bing Chat will get better at summarising large documents such as PDFs and long web pages.
Share in Compose
If you’re using the Compose option in the sidebar to get a first draft of a document you’re writing, you’ll soon be able to share it with colleagues or export it with more of the formatting from the chat, so you don’t have to recreate that in Word or Outlook when you want to use it (Figure C). You can use options in the Edge sidebar to tell Bing you want to change the length and tone of what it’s generated, or to format it as a paragraph, a list of ideas, a blog post or an email.
Figure C
Sharing or exporting your chats and chat history will let you or colleagues use what you learn from Bing chat. Image: Microsoft
Edge Actions
Actions in Edge turn Bing Chat into more of a Cortana-style assistant. If you want to stream a movie, a search will show you where it’s available, but rather than leaving you to click on the link you want, Bing Chat will be able to open the film on the site you choose (Figure D). If you ask for restaurant recommendations, Actions will be able to bring up OpenTable ready to book a table, and let you confirm the booking in the sidebar rather than on the page.
Figure D
Edge Actions will let you use Bing chat as an assistant to watch a movie or book a restaurant. Image: Microsoft
That will be more useful when it works with more services. Microsoft is opening Bing Chat up to developers through OpenAI so they can build the kind of plugins that are already available for ChatGPT, letting it connect to specific information sources or use different services to format the information it shows you. For example, a WolframAlpha action will let you get more complex visualisations of the information in a chat reply than a simple chart; this will let Bing Chat answer more complicated science and maths questions and use data that’s been checked by experts in its replies. OpenTable and WolframAlpha are also on the list of ChatGPT plugins, as are Expedia, Instacart, Kayak, Klarna and Zapier; as OpenAI partners, it’s plausible we’ll see some of them work with Bing, too.
We’ll likely see more details on how developers can work with Bing Chat and Edge Actions at Microsoft’s Build developer conference later this month. Microsoft says the updates to Edge with the new Bing Chat features will be available “in the coming weeks” for Windows 10, Windows 11, macOS, iOS and Android. The next beta channel release of Edge is due next week around May 9, but we don’t know if the new Bing Chat options will appear in a canary release of Edge first.
Keep beta expectations for this preview
Bing Chat remains a preview service that Microsoft is still experimenting with and will keep tweaking. You also need to keep checking the replies you get from Bing Chat: while it uses information from web pages to improve accuracy and stay up to date, the underlying GPT-4 large language model works by predicting the kind of answer you want to see. That can include information you’re likely to want to see even if it isn’t actually on the web, because it isn’t true.