DevSecOps: AI is reshaping developer roles, but it’s not all smooth sailing


New DevSecOps research by GitLab suggests that 65% of developers are using artificial intelligence and machine learning in their code testing efforts or plan to do so within the next three years, signaling a potentially significant shift towards the automation of software development processes.

GitLab’s seventh annual Global DevSecOps Report surveyed more than 5,000 IT leaders, CISOs and developers across the financial services, automotive, healthcare, telecommunications and tech industries. The goal of the survey, which was conducted by market research agency Savanta in March 2023, was to understand the successes, challenges and priorities for DevSecOps implementation.


A growing reliance on AI and ML

Among the key findings in GitLab’s report was the fact that AI/ML adoption in software development and security workflows continues to accelerate, with 62% of software developers using AI/ML to check code — up from 51% in 2022 — while 53% are using bots in the testing process, compared to 39% last year.

GitLab’s report found that organizations were beginning to incorporate security into the software development life cycle earlier, with AI/ML playing a critical role in identifying vulnerabilities in code. Developers who used a DevSecOps platform were more likely to have implemented automation and AI/ML for testing than those who had not, the research found.

Challenges for developers and security pros

Toolchain complexity

Developers and security professionals continue to face challenges juggling the various tools and applications they are expected to use as part of their role. Toolchain management is an issue for security professionals in particular.

GitLab found that 57% of security respondents reported using six or more tools, compared to 48% of developers and 50% of operations professionals.

Not only that, but security professionals’ toolchains appear to be expanding. In GitLab’s 2022 Global DevSecOps Report, 54% of security respondents said they used two to five tools in their workflow, while 35% reported using six to 10; in 2023, these figures were 42% and 43%, respectively.

Consistent security monitoring

Predictably, the plethora of tools security professionals are expected to use makes maintaining consistent monitoring more challenging, with 26% of security professionals identifying this as an issue. Likewise, 26% of security respondents reported difficulty in drawing cohesive insights from all integrated tools, with two-thirds (66%) saying they wanted to consolidate their toolchains as a result.

The study indicated a growing awareness of security as a shared responsibility among DevSecOps teams, with 71% of security professionals surveyed reporting that developers were capturing a quarter or more of all security vulnerabilities — up from 53% in 2022.

A trend in “shifting left”

The report highlighted a shift toward cross-functional collaboration, with 38% of security professionals reporting being part of a team focused on security, compared to 29% in 2022.

According to GitLab, this trend reflects the industry’s move toward incorporating security earlier in the software development lifecycle, known as “shifting left.” This approach enables development, security and operations teams to work together more efficiently, rather than operating in silos.

With 85% of security respondents reporting the same or lower budgets than in 2022, tech teams are having to stretch their dollars further than ever.

SEE: Why shifting left is at top of the agenda for DevSecOps

In the press release about the report, David DeSanto, chief product officer at GitLab, said DevSecOps tools and methodologies could enable organizations to achieve better security and efficiency by consolidating toolchains and reducing costs, ultimately freeing up development teams to focus on mission-critical responsibilities and novel solutions.

“Organizations globally are seeking out ways to do more with less. This means that efficiency and security cannot be mutually exclusive when identifying opportunities to remain competitive,” said DeSanto.

“GitLab’s research shows that DevSecOps tools and methodologies allow leadership to better secure and consolidate their disparate, fragmented toolchains and reduce spend, while also freeing up development teams to spend time on mission-critical responsibilities and innovative solutions.”

SEE: Security teams aren’t the only ones struggling to do more with less.

The most important skills for security pros

As AI and ML become a more integral part of the software development lifecycle, organizations will need to ensure security teams are equipped with the right skills and tools to take full advantage of new technologies. However, GitLab found that AI and ML are competing with other high-impact areas as security professionals shuffle their professional goals.

SEE: Learn about the different DevOps careers and career paths

In 2022, security professionals identified AI/ML as the most important skill for furthering their careers — more so than both developers and operations professionals.

This year, while nearly a quarter (23%) of security professionals chose AI/ML as a top skill, they placed more importance on soft skills (31%), subject matter expertise (30%) and metrics and quantitative insights (27%), suggesting that professionals recognize the need for a well-rounded skill set to navigate modern security challenges.

Worries about how AI/ML will impact jobs

There is some resistance to the accelerating adoption of AI and ML in the software development cycle, which leaders will need to navigate carefully.

Much like in other industries, GitLab’s survey found that tech professionals worry about what AI/ML mean for their jobs: Two-thirds (67%) of security respondents said they were concerned about the impact of AI/ML capabilities on their role, with 28% saying they were “very” or “extremely” concerned.

Of those respondents who expressed concern, 25% said they were worried that AI/ML could introduce errors that would make their job more difficult. Meanwhile, 29% worried that AI/ML would reduce the number of available jobs, and 23% expressed concern that AI/ML would make their skills obsolete.

How leaders can empower DevSecOps

Invest in AI/ML training and tools

Organizations should prioritize equipping their security teams with the necessary skills and tools to effectively leverage AI and ML in their software development and security workflows, maximizing the benefits of automation and improving efficiency.

Promote cross-functional collaboration

Encourage a shifting left approach by fostering collaboration among development, security and operations teams, leading to a more streamlined and efficient software development lifecycle that incorporates security from the ground up.

Consolidate and streamline toolchains

Security professionals are using multiple tools, leading to additional complexity. Focus on consolidating and simplifying toolchains to improve efficiency, reduce friction and costs and enable security teams to focus on their key responsibilities.


66% of Americans would not want AI to help make hiring decisions


Americans are divided on whether artificial intelligence will hurt job-seekers or help them, according to a new Pew Research Center report. The survey of 11,004 people conducted in December 2022 covers a broad swath of questions on the topic of whether workers think AI might make the hiring process better or worse and how.


Respondents divided on whether AI will help or hurt workers

About a third of Americans (32%) think AI will help and hurt workers equally, while another 32% think it will mostly hurt, 13% think it will mostly help and 22% are not sure about its potential effect.

The survey asked whether people were comfortable with employers using AI to “collect and analyze data, make decisions and complete tasks,” particularly in hiring. Although the survey did not specify the type of AI that might be used, the timing matches with the proliferation of generative AI such as ChatGPT.

Most survey respondents (62%) think AI will have a major impact on the American workforce in general. Another 21% said it would have a minor impact. A small portion predicted it would have no effect (2%) or are unsure (15%). Those numbers change a bit when workers are asked about AI's impact on them as individuals: 35% predicted AI would have only a minor impact on their jobs, 28% anticipated a major impact, 19% said it would have no impact and another 19% are unsure.

SEE: Some hiring managers are shifting focus to skills, not degrees or specific experience.

Many job applicants oppose AI in hiring

The Pew Research study shows that most Americans surveyed (72%) are opposed to AI making final hiring decisions. Some are receptive to the idea of AI being involved in part of the hiring process, but not making the final decision — 41% say they oppose the idea of AI reviewing job applications, while 28% favor it and 30% are unsure.

People are also reluctant about applying to a job knowing an AI is helping to make hiring decisions, with 66% saying they would not want to even begin to apply in that case. Another 32% would still want to apply.

SEE: LinkedIn found 68% of people involved in hiring are optimistic about AI’s impact on recruiting.

Opinions on employers using AI to track workers

American adults are more split on whether they favor or oppose the use of AI to track workers’ productivity and handle other day-to-day management tasks. Out of all respondents, 47% oppose the idea of using AI analysis of worker performance to make decisions about who to promote, while 22% favor it.

The numbers are similarly split on other ways in which AI used to track productivity might reduce privacy. Of the survey respondents, 51% oppose using AI to track exactly what workers are doing on their work computers, while 27% favor it and 22% are unsure. A large majority (70%) oppose using AI to analyze employees’ facial expressions, the least popular use case posed by the survey.

How AI in hiring affects candidates’ decisions to apply

Another important aspect of the prospect of using AI in the hiring process is that job applicants want “the human factor.”

“What if I don’t have the ‘right’ keywords on my application?” one respondent, a woman in her 40s, wrote. “Would I be dismissed outright?”

In total, 66% said they would not want to apply to a workplace that uses artificial intelligence to help in hiring decisions; another 32% said they would still apply. The remaining percentage did not give an answer.

Some concern about racial bias when using AI for hiring

Another concern about AI hiring is that it could exacerbate bias in the hiring process. A majority (53%) of respondents who believe racial or ethnic bias is a problem in hiring think AI would reduce that bias, while 13% say AI would be worse than humans at treating applicants equally and 32% think it would be about as biased as humans. Among Black Americans surveyed, 47% said AI would reduce bias in hiring and 20% said it would get worse, a stronger negative sentiment than in the group overall; 32% said they weren't sure.

Awareness of AI in hiring still has room to grow

As of this late 2022 survey, a majority of Americans are not thinking about how AI could be used in the hiring process. According to Pew, 61% said they hadn’t heard about AI use in the hiring process before participating in the survey. Employers should be transparent about whether they are using AI in their hiring process and educate job seekers about what to expect from that process.


Salesforce announces Einstein GPT for field service


Salesforce’s Einstein GPT and Data Cloud are now available in beta in the Field Service app, giving field service workers access to artificial intelligence features such as real-time data, automation and summarization.


Who is Einstein GPT for Field Service for?

Salesforce says its addition of Einstein GPT to the Field Service app can help workers communicate more efficiently with their contact center, save time taking notes and generate service reports. Field service workers may include home nurses, technicians, contractors, and workers in the public sector and manufacturing. For example, Einstein GPT can help home nurses automate the process of writing up their notes after a home visit.

Salesforce works with several large language model partners, including OpenAI, Cohere and Anthropic, said Taksina Eammano, executive vice president and general manager of Salesforce Field Service, in an interview with TechRepublic.

Benefits of Einstein GPT for the Field Service app

The AI will complement the Field Service app’s ability to manage field workers’ tasks, manage assets and equipment, schedule and optimize travel and improve the customer experience, Eammano said. The AI functionality is made with part-time contractors in mind, allowing contact centers to see when contractors who only do certain tasks or work limited hours are available.

“Service swarming”

Field Service Mobile with Einstein GPT will enable teams to “service swarm” customer issues and work orders in Slack; service swarming is a Salesforce support model in which a worker can bring team members from across the organization into a conversation. Field Service mobile users can also use pre-built solutions from Salesforce’s Component Library to build custom mobile experiences for tasks like finding nearby spare parts or managing timesheets.

SEE: Salesforce revealed the collaboration with OpenAI on Einstein GPT in March.

Efficiency features with Einstein GPT on the Field Service app

Einstein GPT on the Field Service app can be used for on-the-job training or other communication between workers. It also offers pre-work summarization, which briefs the technician on what the previous worker who visited the site encountered and did.

Eammano said this is part of an overall philosophy of extracting more value from each site visit. For instance, the AI-enabled app may recommend other products that the customer might be interested in based on the most recent job a technician performed for them.

“Why this is so interesting [is] we have heard from our customers we’re sitting on their sales and service and marketing and commerce data,” said Eammano. “To have a way to unify that, to have our customers trust us to be that, we’re very concerned about that.”

Tutorials and guides

Another way in which AI is being added to the Field Service app is that Einstein GPT enables workers to search for step-by-step guides or to find instructions tailored to their specific tasks. These might be based on public knowledge or internal information shared with Salesforce.

“We are also looking to trust the data is coming from your CRM data,” Eammano said. “The corpus of data is within your environment and is substitutive and additive to the public data. That might include weather, maps, data and external product knowledge.”

Field service is just beginning the AI journey

Eammano pointed out that field service is well-positioned to benefit from an AI product because many smaller companies are still working on getting up to speed with digitization.

“These companies are still going through their digital transformation,” Eammano said. “Operations is still catching up to be able to drive automation.”

Some companies that use Field Service today are asking, “What data do I want, and what data will I get?” Eammano said. She sees an opportunity to make those decisions at the same time as housing all of the data from the workforce in the same service — namely, Salesforce and its companion apps.

Field service operators could benefit from greater safety with AI, Salesforce claimed, with real-time monitoring enabling companies to be sure technicians get to work and back home as expected.

Eammano sees AI as augmenting, not replacing, jobs in field service. Some field service jobs may move closer to the contact center, she predicted. Looking further in the future, she sees customers being able to service their own equipment more often. Even further into the future, she predicts a blended “autonomous apprentice” model where human technicians train bots to augment their work.

How AI will change field service management

In a world where a technician team might be made up of both people and the bots they’ve trained, managers might look for different markers for success.

“One of the areas I’m very excited (about) is around thinking about what outcomes enterprise software really wants to measure,” Eammano said. “How about your service outcomes? How do you measure success? Today we have customer satisfaction/NPS, but the future is: how do tech providers demonstrate much more of that outcome and create that outcome well together?”

Eammano sees CRM, data and AI as the future of how enterprise work works — and how Salesforce can serve it.

Salesforce is working on unifying the Field Service mobile app and Salesforce mobile app’s functionalities so that some AI features, such as conversational layers, in-app summarization and content generation, can cross over between both.

Data Cloud and Flex Worker Management enhancements

In other Salesforce news, Data Cloud for Asset Service Management has been enhanced with real-time data and predictive, usage-based maintenance. Ideally, this will help field technicians and other workers who monitor heavy machinery or infrastructure catch failures before they actually happen.

For managers, Flex Worker Management has been enhanced with AI that can analyze when and where it’s best to send field workers based on their skills, their distance from the work site and the available tools.



DeepMind melds with Google’s Brain Team to accelerate AI growth


Despite recent reports of Google hastily pushing AI developments in an attempt to remain competitive in the AI space, the tech giant announced another significant advancement in its AI projects on Friday.

Google is merging its Brain Team, a Google Research team that focuses on machine learning and AI, with DeepMind, a leading AI research company acquired by Alphabet in 2014, to create a new group called Google DeepMind.

Also: This new technology could blow away GPT-4 and everything like it

The group will be backed by Google's computational resources and will focus on further accelerating Google's development of AI products and services and, most importantly, their safety.

"The pace of progress is now faster than ever before," says Google CEO Sundar Pichai. "To ensure the bold and responsible development of general AI, we're creating a unit that will help us build more capable systems more safely and responsibly."

The CEO of DeepMind, Demis Hassabis, will lead the development of Google's "most capable" and "responsible" AI systems, according to the release.

Jeff Dean, a co-founder of the Brain Team and former SVP of Google Research and Health, will shift into the role of Google's chief scientist, serving both Google Research and Google DeepMind.

Google Research will continue to further research on important topics in computer science such as algorithms and theory, privacy and security, health, climate and sustainability, and responsible AI, according to the post.

Also: A team of ex-Apple employees wants to replace smartphones with this AI projector

A major theme of the release was the concept of responsible and safe AI, likely following all the backlash Google has received since launching Bard.

On Wednesday, both former and current employees shared with Bloomberg that Google's rush to release Bard caused the company to compromise AI ethics and safety. Perhaps the Google DeepMind group will be a step in the right direction towards addressing that issue.

People are turning to ChatGPT to troubleshoot their tech problems now


Just a year ago, if you had a question about anything IT-related, you would have likely turned to Google to find an answer. Now people are turning to ChatGPT to get conversational, AI-generated explanations instead.

Electric surveyed 1,000 people to find out where they are getting their IT advice from. A whopping 66% of respondents said they went to ChatGPT for help with their IT problems, according to the study.

Also: This new technology could blow away GPT-4 and everything like it

Some of the most popular IT questions in the U.S. include basic questions such as how to fix a frozen computer screen, slow internet, slow computer or an overheated phone.

Unsurprisingly, Gen Z is the generation most likely to seek IT advice from ChatGPT, at 83%, followed by Millennials at 67%. Gen X and Baby Boomers didn't trail too far behind, at 50% and 48%, respectively.

So is getting your IT advice from an AI chatbot a good idea?

Electric asked 200 IT professionals to judge ChatGPT's responses on four of the most-searched tech questions in the United States to find out.

Also: ChatGPT or Google: Which gives the best answers?

ChatGPT's responses earned an overall accuracy score of 39 out of 100 from the IT experts. Considering ChatGPT has only been around since November 2022, that score isn't too shabby.

Some of ChatGPT's answers even earned high scores; for example, the response to "How do I make my internet faster?" received a whopping accuracy score of 74.


85% of business leaders would let a robot make their decisions


There is so much data out there today that it is causing anxiety among office workers, with many business leaders wishing robots could make decisions for them.

Some 74% in Asia-Pacific said the number of decisions they made each day had increased tenfold in the last three years, with 86% noting that the volume of data made decisions at work and in life more complicated. Another 89% said the inability to make decisions was creating a negative impact on their quality of life, revealed a survey by Oracle, in partnership with DKC Analytics.

Also: This new technology could blow away GPT-4 and everything like it

With so much data out there, 33% said they did not know which sources or data to trust and feel overwhelmed, while 71% had simply given up on making a decision. The study polled 14,000 employees and business leaders globally, including 4,500 from six Asia-Pacific markets: Singapore, Australia, South Korea, China, India, and Japan.

Amid the information overload, 92% said they changed the way they made decisions over the last three years, with 31% relying entirely on gut feel. Some 96% wanted help from existing data but believed they lacked the skills to interpret the information in meaningful ways.

Overwhelmed by the volume of data, 85% of business leaders would let a robot make their decisions and avoid the challenges posed.

Also: A team of ex-Apple employees wants to replace smartphones with this AI projector

Some 87% admitted to suffering from "decision distress", having regretted or feeling guilty about decisions they made in the past year. Another 73% said the lack of trust in data and the volume of data had stopped them from making any decision.

And while business leaders in the region recognize data is critical to their company's success, the majority feel they lack the right tools to harness it.

The study found that 74% said dashboards and charts they received did not always relate directly to decisions they had to make, with 77% describing most available data as helpful only for IT professionals or data scientists.

Some 47% said managing different data sources required additional resources to collect all the data, while 38% said it slowed down strategic decision-making. Another 31% said that having to manage different data sources created more opportunities for error.

Also: Future ChatGPT versions could replace the majority of work people do today, says Ben Goertzel

However, 97% believed the right data and insight could help them make better HR (human resources) decisions, while 95% and 93% said likewise for supply chain and finance-related decisions, respectively.

Some 43% wanted data to help them make better decisions, while 37% would want it to reduce risk and 30% wanted data to plan for the unexpected.

Some 90% believed access to the right type of decision intelligence could make or break their company's success.

Without data, 45% of respondents said their decisions would be less accurate, while 41% said they would be prone to error.

Also: These are the most in-demand tech roles in 2023

"As businesses expand to serve customers in new ways, the number of data inputs required to get the full picture expands, too," said Chris Chelliah, Oracle's Asia-Pacific Japan senior vice president of technology and customer strategy. "The hesitancy, distrust, and lack of understanding of data shown in this study align with what we hear from customers rethinking their approach to decision making."


How to use ChatGPT to write Excel formulas


Figuring out how to write the right Excel formula to achieve the result you need can be a challenge, especially when you have a lot of data on a spreadsheet and need an intricate formula beyond calculating a sum. That is until ChatGPT arrived.

Also: I used ChatGPT to write the same routine in 12 top programming languages. Here's how it did

ChatGPT and other AI chatbots can easily help you create formulas for your Excel spreadsheet for free, without signing up for a specialized website like ExcelFormulaBot. The great thing about using an AI chatbot such as ChatGPT or Bing Chat to create formulas for Microsoft Excel (and Google Sheets) is that you can request formulas that are as simple or as complicated as you'd like, as long as you're crystal clear in your instructions.
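For example, a simple exchange might look like this (the prompt and cell reference here are illustrative, not taken from the article's own screenshots):

```
Prompt:  "Write an Excel formula that adds 8% sales tax to the price in cell B2."
Reply:   =B2*1.08
```

Paste the returned formula into the cell where you want the total to appear, then fill it down if you need it for a whole column.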

How to use ChatGPT to write your Excel formulas

What you need: Using ChatGPT to write an Excel formula requires access to Microsoft Excel or Google Sheets, as the formulas work in both applications. You will also need an OpenAI account to access ChatGPT.

Keep in mind that, as intelligent as AI chatbots are, they're still not as well-versed in nuances as a person and can make errors or misinterpret prompts.

In our example, the prompt didn't specify which row in column F the formula should start with, so ChatGPT defaulted to F2. Paste the generated formula into that cell to see the result, then fill it down the rest of the column to complete the spreadsheet.

FAQs

Is there a way to use AI for formulas within Excel?

Ideally, you'd be able to generate formulas within Microsoft Excel as easily as you can with ChatGPT, with an AI tool built right into the program. At this time, that's not the case, but Microsoft is rolling out Copilot, a set of AI tools for the Microsoft 365 suite. Though Microsoft Copilot isn't widely available yet, one can hope the new features include a GPT-4-integrated Microsoft Excel.

Why is the formula from ChatGPT not working?

If ChatGPT gave you a formula that isn't working in Excel or Sheets, then it might've misunderstood your prompt. Go through the formula to see where the error could be on ChatGPT's end and see how you can reword your prompt to get the correct answer. In the example below, my prompt was "Create a formula for the total cost that adds the values in columns F, G, and H".

Since I didn't specify I needed the formula to add the values in cells F3, G3, and H3, the formula adds the values in the entire column range, which is not what we need.

I then corrected my prompt to request ChatGPT to "Create a formula for the total cost that adds the values in cells F3, G3, and H3".
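Side by side, the two prompts produce very different formulas (these are representative of the behavior described, not copied from the screenshots):

```
Vague prompt:    "...adds the values in columns F, G, and H"
Formula:         =SUM(F:F, G:G, H:H)    <- sums every row of all three columns

Precise prompt:  "...adds the values in cells F3, G3, and H3"
Formula:         =SUM(F3:H3)            <- adds only the three cells in row 3
```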

Can ChatGPT write intricate Excel formulas?

ChatGPT is the perfect resource to write formulas for Excel or Google Sheets, whether they're simple or intricate. We used simple formulas for this example to walk you through the process, but you can ask the AI chatbot to write more complicated formulas and test its limits. Remember that the accuracy of ChatGPT's results depends a lot on how clear your prompts are.

Can you use Bing Chat to write Excel formulas?

Other AI chatbots, like Bing Chat and Google Bard, are also able to create Excel formulas for you by following the steps above.

In our testing, Bing Chat and Google Bard both answered the same prompt I gave ChatGPT to write a formula that calculates sales tax.


How to use ChatGPT to write an essay


ChatGPT's advanced capabilities have created a huge demand, with the AI tool accumulating over 100 million users within two months of launching. One of the biggest standout features has been its ability to compose all sorts of text within seconds, including songs, poems, bedtime stories, and essays.

Also: The best AI chatbots to try

Contrary to popular opinion, ChatGPT can do a lot more than just write an essay for you (which would be considered plagiarism). What is more useful is how it can help guide your writing process. We show you how to use ChatGPT to do both the writing and assisting, as well as some other helpful writing tips, below.

How ChatGPT can help you write an essay

If you are looking for ways to use ChatGPT to support your writing, here are five different ways to explore.

It is also worth noting before you get started that if you have access to Bing Chat or Google Bard, you can use the same tips below on those AI chatbots. Since Bing Chat and Google Bard are connected to the internet, they can both include current event sources and context.

Also: ChatGPT vs. Bing Chat: Which AI chatbot should you use?

If choosing between those two chatbots to write an essay, we recommend Bing Chat.


AI can write your emails, reports, and essays. But can it express your emotions? Should it?


Artificial intelligence is getting smarter — or humans are making it seem so.

In recent months, generative AI has reached a point of innovation we've never seen before, opening up many ways for the technology to mimic human language processing. With advanced AI chatbots like ChatGPT doing everything from writing essays and Excel formulas to debugging code, it feels like the sky's the productivity limit.

Also: This new technology could blow away GPT-4 and everything like it

Generative AI has become so advanced that a bot's responses can uncannily mirror a human's cadence, syntax, and tone — though sometimes its diction contains a hint of apathetic directness. AI bots capable of advanced conversation can help humans find the words we don't know how to say, from getting started on a book report to expressing ourselves on heavier topics.

It's becoming clear that AI may be able to help many people communicate in more complex ways than before. Challenging times often call for us to find it within ourselves to deliver a message that can be uncomfortable, sad, shameful, or angering. But why feel those emotions when we write them down when we can have an AI chatbot do it for us?

Also: Do you like asking ChatGPT questions? You could get paid (a lot) for it

Imagine being a senior manager at a company, and it's that time of year when you have to tell several great employees that the company has decided to terminate their positions. You've created bonds with these employees and are genuinely sad to see them leave. Well, you could save yourself some time and struggle and ask ChatGPT to write the severance communications for you.

You'll find it will write respectful messages and imitate an empathetic tone. After a few tweaks here and there, you can make the words sound more like they came from you. And by not having to come up with these words yourself, you didn't have to really process them emotionally. Maybe you got to avoid thinking about the employee's family and how their life will be affected by an unexpected layoff.

Also: How to use ChatGPT to write an essay

Or, you've been planning a wedding for the last seven months, and it's been exceptionally stressful to keep up with work and personal endeavors on top of planning your nuptials. It's getting close to the day, and you still haven't written your wedding vows. There's an AI solution for that, too.

Joy, a digital wedding planning platform, can help you find the sentimental, joyful, nostalgic, or formal tone you need. Wedding vows are just words, right? You show your spouse daily how much you love them, and you're trying to plan the perfect wedding for your perfect person. Besides, you desperately need to test wedding cakes and spend time on something other than ruminating over your vows.

Here's my last scenario: Your good friend's parent passed away recently, and your friend asked you to write a few words to say at the funeral. You're also grieving because you grew up with this friend and were very close to their family as a kid. Why not let an AI chatbot write the memorial speech for you?

Also: How to use ChatGPT to write code

After all, writing about someone who passed away can be an excruciating process. And for AI-powered services like Empathy, which offers an AI trained to write obituaries, the argument is that families are already overloaded with funeral and estate planning while grappling with complicated emotions.

The business model is as follows: allow the AI to handle the words so that you can handle the rest.

That sounds sensible. But I can't help but wonder if the deceased's family, the laid-off employee, or your new spouse would feel uneasy knowing an AI chatbot wrote the heartfelt words being passed off as your own. Would it make you feel guilty to see someone you care about cry over the kind things you said, when in reality, the only words that were yours were the ones typed into the prompt?

Also: How to use ChatGPT to write Excel formulas

All of these scenarios raise a simple moral question: Is it a breach of someone's trust not to disclose that an AI language model penned an emotionally heavy message?

But that question brings up the other side of the coin. Sometimes, people struggle to express challenging emotions. And technology can now help us get those words out, regardless of our emotional capacity, reading and writing capabilities, or physical and intellectual abilities — and that's a fantastic thing.

Also: AI could automate 25% of all jobs. Here's which are most (and least) at risk

Perhaps using an AI chatbot to help you write an emotionally charged message can even help you understand your feelings. Seeing the chatbot's response might help you better articulate your complicated emotions the next time you need to do so. Or it can help you to see what parts of a situation you need to take responsibility for and how to try to change your communication in the future.

But if AI-written vows, eulogies, speeches, memoirs, love letters, apologies, poems, or songs become ubiquitous, are they still meaningful? Or, could they degrade the importance we put on the things our partners, families, friends, or coworkers write to or for us?

Also: How to use ChatGPT to build your resume

What do we lose when we outsource expressing our emotions to an AI chatbot? We've all heard that sitting with our emotions and feeling them is how we process them and get the intensity to pass. Speaking from the heart about a complex, heavy topic is one way we can feel true catharsis. AI can't do that processing for us.

There's a common theme during periods of technological innovation that technology is supposed to do the mundane, annoying, dangerous, or insufferable tasks that humans hate doing. Many of us would sometimes prefer to avoid emotional processing. But experiencing complex emotions is what makes us human. And it's one of the few things an AI model as advanced as ChatGPT can't do.

Also: ChatGPT's intelligence is zero, but it's a revolution in usefulness, says AI expert

If you think of expressing emotions as less of an experience and more of a task, it might seem clever to automate them. But you can't conquer human emotions by passing the unsavory parts of them to a language model. Emotions are critical to the human experience, and denying them their place within yourself can lead to unhealthy coping mechanisms and poor physical health.

AI chatbots are useful communication tools. They are extensively trained on human language, but they're not friends or therapists. Complex human emotions will always be best understood and processed by human intelligence.


ChatGPT’s ‘accomplishment engine’ is beating Google’s search engine, says AI ethicist

Google’s search dominance won’t be toppled easily. But new AI features in search engines like Microsoft’s Bing and startup competitor You.com are rapidly reshaping consumer expectations. As users seek more efficient search options, the search landscape is poised for a significant shift.

According to AI ethicist and You.com CEO Richard Socher, the search landscape has been influenced by several waves of Google alternatives and "hacks." For example, millions of users add "site:reddit" to their search queries to find authentic opinions and experiences from Reddit rather than commercial content optimized for Google. Meanwhile, the rise of TikTok as a search alternative among Gen Z users, and the emergence of generative AI chatbots, have also challenged traditional search methods by surfacing social video content that doesn’t originate from Google-owned YouTube. As Socher explained, Bing, You.com, and other search startups incorporated these features, offering users a more personalized search experience.

Also: ChatGPT’s intelligence is zero, but it’s a revolution in usefulness, says AI expert

Incorporating AI and chat in search is more than just a novelty, said Socher. It has the potential to transform search engines into "accomplishment engines," helping users complete tasks more efficiently and avoid being inundated with low-quality content or ads.

"SEO-driven low-quality content has diluted the value of search results," said Socher. "This has fed consumer demand for better search experiences."

But can generative AI and user control really challenge Google’s hegemony in the search market? According to Socher, the key lies in innovation, user control, and strategic partnerships. He acknowledged that Google has built a significant moat around its business, from owning the operating systems Android and Chrome OS and the browser Chrome, to maintaining a $15 billion partnership with Apple to remain the default search engine on its devices. However, he also pointed out that entrenched habits can change, citing the shift from using "Skype" as a verb to newer communication platforms like Zoom.

Also: AI could automate 25% of all jobs. Here’s which are most (and least) at risk

Socher says Google will struggle to adapt to new paradigms, such as generative AI, due to its existing business model and entrenched market dominance. Google’s huge margins from its ad-driven search engine make it difficult for the company to embrace innovative technologies that might disrupt the delicate balance of its revenue streams. As a result, Google faces the classic innovator’s dilemma, reluctant to fully commit to a new technology that could potentially cannibalize its core business. This hesitation creates an opening for competitors like You.com to leverage the latest AI advancements and offer users a more advanced and personalized search experience, thereby challenging Google’s incumbency.

He envisions a future where AI will dramatically improve search engines by integrating search, chat, and generative AI. This approach could create a more interactive and helpful user experience, moving beyond search engines’ traditional list of links. Socher believes that by harnessing generative AI, search engines can better understand users’ intentions and queries, providing more relevant and actionable results. These features could help users to accomplish tasks more efficiently, such as generating images and essays or even completing HTML websites directly within the search results.

Also: How ChatGPT works

Socher also highlights the importance of AI in driving innovation within the search engine market. He notes that You.com is often the first to introduce novel AI capabilities, which other search engines later adopt. This continuous innovation cycle attracts users who want to be at the cutting edge of AI technology and drives interest in platforms like You.com. By leveraging AI to deliver a more personalized and interactive search experience, Socher believes that search engines can effectively challenge the dominance of established players like Google and offer users a superior alternative for finding and accessing information online.

The discussion of generative AI inevitably leads to questions about the potential development of AGI, or artificial general intelligence. This term often refers to a superintelligent AI that could surpass human intelligence and pose an existential threat to humanity.

However, Socher takes a more pragmatic view, suggesting that the fears surrounding AGI are often overhyped. He argues that large language models like those behind generative AI lack the desire or capability to take on a mind of their own. Comparing them to the steam engine and the internet, Socher contends that while these technologies have surpassed human capabilities in specific areas, they have not posed an existential threat to humanity. "Are [AGI fears] actually realistic? I really don’t think so," said Socher. Watch the full interview to hear his full explanation.
