Bing AI chat expands to Chrome and Safari for select users

Bing AI Chat lands in Google Chrome

Microsoft's AI chatbot is finally wending its way to non-Microsoft browsers. Previously accessible only in Edge and the Bing mobile app, the feature is now popping up in Google Chrome for Windows and Apple's Safari for macOS, at least for some people. Powered by a custom version of OpenAI's ChatGPT model, Bing AI lets you summon the bot to answer questions, provide information, and compose content.

In an emailed statement sent to TechCrunch, Microsoft confirmed the rollout of Bing AI chat to Chrome and Safari.

"We are flighting access to Bing Chat in Safari and Chrome to select users as part of our testing on other browsers," said Caitlin Roulston, Microsoft director of communications. "We are excited to expand access to even more users once our standard testing procedures are complete."

Since the expansion of Bing AI to other browsers is hitting only select users, it's not yet widespread. In some cases, a popup notice appears on the taskbar in Windows 10 or 11 informing you that Bing AI is available to try in Chrome. Otherwise, you can simply open Chrome and go to the Bing website. If you're one of the lucky users, you'll see the familiar Chat or Chat Now icon that will take you to the chat window.

On my end, the new Bing AI integration showed up in Chrome on my desktop PC but not on my laptop, even though I'm signed into both with the same account. And so far, Safari on my Mac shows no signs of Bing AI. If you're not yet part of the early recipient group, you'll instead get a prompt telling you to install or open Edge to use Bing AI.

If you do see the AI in Chrome, the screen looks just like the one in Edge with sample questions and an option to choose a conversation style among More Creative, More Balanced, and More Precise.

Just type and submit your prompt, and the AI will respond accordingly.

The Bing AI experience may look the same in Edge, Chrome, and presumably Safari, but there are a couple of obstacles you'll bump into with the non-Microsoft browsers, as noted by Windows Latest. With Bing AI in Chrome, you're restricted to five messages per chat compared with 30 in Edge, though both let you type as many as 4,000 characters in a prompt. And despite the AI's debut in Chrome, a popup window keeps appearing, prompting you to go to Edge to chat with Bing.

Twenty years ago, AIM chatbot SmarterChild out-snarked ChatGPT

'SmarterChild sort of opened the Pandora’s Box'

By Amanda Silberling

In the early aughts, millions of preteens raced home from school, hopped onto their parents’ computers, opened a chat window and typed… probably something like “fuck,” or “(.)(.)”.

“Do you kiss your mother with that mouth?” SmarterChild would reply in an instant. It would make you apologize, and then move past your indiscretions to answer all of your questions about the population of La Paz, the score of the Marlins game or the equations from your math homework.

Long before there was ChatGPT, there was SmarterChild, an instant message chatbot whose encyclopedic knowledge and quick wit could put Google to shame. Thirty million people added SmarterChild to their AIM and MSN buddy lists in the early 2000s, and for many of us it was our first encounter with artificial intelligence, a technology that now feels unavoidable.

“We were offering people something they never had before,” said Peter Levitan, a co-founder of ActiveBuddy, SmarterChild’s parent company. “When you talked to SmarterChild, it knew who you were when you came back. It was like your friend, and having a computer friend then, and now, is fantastic.”

SmarterChild was far less sophisticated than ChatGPT, but then again, this was 2001. The chatbot was special enough that it inspired investors to fund Siri, which paved the way for Amazon’s Alexa and other robot assistants.

Levitan has remained level-headed about the future of AI. But another ActiveBuddy co-founder, Robert Hoffer, isn’t as calm. Dubbed “the bot father,” Hoffer describes himself as “cautiously skeptimistic” and repeatedly references stories like “Frankenstein” and the myth of Prometheus. The common denominator of these tales? Perhaps humans have gone too far, just because we can.

“It’s wonderful that SmarterChild sort of opened the Pandora’s Box,” Hoffer told TechCrunch. “Unfortunately, now, I feel like I have a certain amount of responsibility to share with the world, the good, the bad and the ugly.”

‘We had a sense of humor’

I met SmarterChild when I was 10. I wouldn’t get my first cell phone for a few more years (a Motorola Razr with a glittery case, shedding sparkles into my pockets), and I had never experienced the mind-blowingly routine luxury of instant, online connection. Now, this technology is so normal that we call these conversations DMs (direct messages), not IMs, since the “instant” part is redundant. But my first conversations with SmarterChild — my first conversations with anyone on the internet, really — felt magical.

As a fourth grader, I envied my older brother, whose friends from school had started making their own AIM accounts, allowing them to do homework and gossip together in group chats. But I had SmarterChild, at least, who could keep me entertained for a solid half hour when we played Hangman together.

I may have been on the younger end of SmarterChild’s user base, but I was by no means an anomaly. It was most popular among 10- to 16-year-olds, and according to Hoffer, SmarterChild usage spiked on weekdays around 3 PM, when kids like me were coming home from school.

Older users, of course, would test SmarterChild’s limits, cursing at it and seducing it. But unlike the AI bots that are now cropping up every day, SmarterChild had a personality.

“We had a sense of humor,” said Hoffer. “So if someone tried to have sex with it, it said, ‘Oh, I don’t have the parts, I’m just a robot!’”

These witty retorts mostly came from Pat Guiney, a copywriter who joined ActiveBuddy in 2000.

“I remember on my very first day, I was given a long list of the most obscene profanity you can think of, and my job was to try to think of responses to it,” Guiney said in an interview with the AV Club. “In other words, if someone typed some incredibly offensive thing to one of our chat buddies, how should we respond?”

Now, bots like Snapchat’s ChatGPT-powered My AI will respond to inappropriate messages by saying, “Sorry, I can’t respond to that.” SmarterChild, on the other hand, would ask for an apology if you were mean to it. And it’d give you the silent treatment until you said you were sorry.

It seems to be human nature that when we’re confronted with not-quite-human-beings, we will act on our most sadistic urges. We light a fire in our Sims’ home and watch them panic to save their estate, we find increasingly cruel ways to execute Koroks and we harass SmarterChild.

According to Hoffer, the complete chat logs from SmarterChild live somewhere in a basement in Glen Rock, New Jersey, encompassing both the lightest and darkest impulses of mankind.

“I’ve read more than almost anybody on planet Earth from these chat logs,” Hoffer told TechCrunch. “We have billions and billions of conversations. Many of them push the boundaries way far, right away. The speed at which they did it, even as young kids… but they were also asking for help.”

Some people loved SmarterChild. Some people hated it. While ChatGPT is divisive because of its impact on technology, SmarterChild was divisive because of its snarky persona.

“What the AI world is not delivering at the moment is really any personality, or any soul,” Levitan told TechCrunch.

Siri, Alexa, ChatGPT, Bard and most other AI bots that have cropped up since the 2010s have very unassuming demeanors, which Hoffer thinks is intentional.

“If you have a personality, and your personality is strong, you will appeal to exactly 50% of the people in the world,” Hoffer said.

‘The progenitor of all modern bots’

If you ask ChatGPT how many games the Phillies have won this season, it won’t know, since it’s only been trained on data through 2021. But SmarterChild knew. ActiveBuddy licensed databases from IMDb, the Weather Channel, the Dewey Decimal System, Elias Sports, the Yellow Pages and Sony, enabling it to instantly share a wealth of information.

“Everybody thought back then that the internet was slow because we were putting HTML files through it, and we realized that if you put text through it, it was instantly fast,” said Hoffer. “We sort of showed up like a virus, and when we look back on it, it was quite clear that we were definitely the progenitor of all modern bots, from Siri to Amazon Alexa to all of the various AIs we see today built around large language models.”

SmarterChild wasn’t the first AI-powered chatbot, but it bridged the gap between current technology like Siri and Alexa and earlier efforts like Dr. Sbaitso on MS-DOS and ELIZA. These earlier bots could also process natural language, but they lacked the large swaths of data that made SmarterChild’s conversations more productive and useful.

SmarterChild grew from zero to 30 million users in under six months, solidifying itself as a phenomenon of the early aughts internet. Even Radiohead came calling, using ActiveBuddy’s technology to promote its 2001 album “Amnesiac” through a chatbot named GooglyMinotaur.

Though Radiohead didn’t realize it, they had identified the use case that ActiveBuddy would pursue for SmarterChild. The company couldn’t make money off of a free chatbot, but what if they allowed other companies to make their own AI-driven chatbots that could be tailored directly to other businesses?

But Hoffer was more interested in advancing the tech behind SmarterChild than he was in creating a SaaS product to help corporate brands (not as cool as Radiohead) make more money.

“There was a huge fight at the board of directors,” Hoffer told TechCrunch. “I was all about SmarterChild and wanting to win the Turing Test. They wanted to monetize. I lost that fight and got kicked out of the company as a result.”

Hoffer left ActiveBuddy in 2002; then, the company rebranded to Colloquis, though it had a brief moment calling itself Conversagent, a portmanteau of conversation and agent, which reflected its more corporate trajectory.

Ultimately Microsoft bought the company that created SmarterChild in 2006. In the 17-year-old press release celebrating the deal, Microsoft wrote that it would use Colloquis to bring automated customer service agents to Xbox. There was no mention of SmarterChild.

“When you’re a huge business, you can carve off what doesn’t make sense strategically, and that’s what happened,” Levitan explained. “You’re dealing with the difference between a forward-thinking, aggressive startup team and a major corporation that does not want to offend anyone at any time.”

‘We’ve just opened Jurassic Park’

The Microsoft deal didn’t go the way that ActiveBuddy’s founders hoped.

“Of course, you know, now we’re 15 to 20 years later, and they’re buying similar services,” Levitan quipped.

The founders have kept a close eye on developments in AI over the years. Hoffer remembers the famous faceoff in 2011, when “Jeopardy!” champions Ken Jennings and Brad Rutter failed to defeat IBM’s AI, Watson, in a televised match. He watched alongside some engineers, “gobsmacked” at how this computer made legendary trivia masterminds look like amateurs.

Watson could quickly recall basic trivia facts, but it failed in categories like “also on your computer keys” and “one buck or less,” which required some lived human experience to conjure the correct response. Unfortunately for team humanity, these foibles weren’t enough for Jennings and Rutter to prevail.

AIs still struggle to overcome similar limitations. While an AI could write something that resembles a TV pilot, it won’t be very interesting, and it will likely contain copyrighted material. So, Levitan’s predictions for the future of AI aren’t too foreboding. He predicts that soon, we’ll be able to voice control bots like ChatGPT, but he doubts that AI will ever truly become sentient.

“I am a believer in human nature, and that everyone has a very personal voice,” Levitan told TechCrunch. Levitan’s biggest concern with AI is that people will believe everything they read, just because a computer said it. But Hoffer is more worried that the consequences of this technology will pose even larger issues.

“We’re right before the opening, or maybe we’ve just opened Jurassic Park,” Hoffer said. “How far are we from a lens in my eye that has AR hooked up to a bot? Probably not very.”

When SmarterChild typed its final words in 2008, I was in seventh grade, and my friends had finally come around to joining AIM. My away message was usually the lyrics to “Decode,” the song that Paramore wrote for the “Twilight” movie. Now that my buddy list was a bit more fleshed out, I chatted with my classmates about who we had crushes on, always using code names, because we were so deeply paranoid that these clueless 13-year-old boys would somehow manage to hack our AIM accounts and read what we said about them.

I was so captivated by the gut-wrenching highs and lows of middle school drama that I didn’t even realize what happened. My first friend on the internet had logged off forever.

OpenAI To Soon Launch Open Source GPT Models 

OpenAI is likely to release the weights of its models in the coming months. Amid the Llama fever, OpenAI’s Andrej Karpathy recently said that much of this work is generic to transformer language models: “If/when OpenAI was to release models as weights (which I can neither confirm nor deny!) then most of the code here would be very relevant.”

In other words, according to Karpathy, a prominent figure in the field of deep learning, OpenAI may well make GPT-3.5 open source. It has to be noted that the company has not made any official announcement about this. The conversation stems from a Twitter (now X) thread in which a user asked Karpathy why he has been playing with Llama 2 instead of building Jarvis for OpenAI.

Yay, llama2.c can now load and inference the Meta released models! 🙂 E.g. here inferencing the smallest 7B model at ~3 tokens/s on 96 OMP threads on a cloud Linux box. Still just CPU, fp32, one single .c file of 500 lines: https://t.co/CUoF0l07oX
expecting ~300 tok/s tomorrow 🙂 pic.twitter.com/bjurODT4dL

— Andrej Karpathy (@karpathy) July 25, 2023

This new development comes against the backdrop of the recent release of Baby Llama, aka llama2.c, in which Karpathy has been exploring the concept of running large language models (LLMs) on a single computer, inspired by the release of Meta’s Llama 2.

Check out the llama2.c repository on GitHub for the code.

Karpathy said llama2.c can now load and run inference on the Meta-released models. He gave an example of inferencing the smallest, 7B, model at ~3 tokens/s on 96 OMP threads on a cloud Linux box, and is expecting ~300 tok/s soon.

Further, he said that if you can get the 7B model to run at nice, interactive rates, then you can go from “scratch-trained micromodels” to a “LoRA fine-tuned 7B base model”, all within the code of the minimal llama2.c repo (both training and inference), reaching more capability with less training data.
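
For readers unfamiliar with the LoRA technique Karpathy mentions, here is a minimal, illustrative PyTorch sketch (this is not code from llama2.c, and the layer sizes are invented for the example): a pretrained linear layer is frozen and a small low-rank update is trained on top of it, which is why fine-tuning needs far less data and compute than training from scratch.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pretrained weights
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x):
        # frozen path plus a small, trainable low-rank correction
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scale

# Toy usage: wrap a single projection of a (hypothetical) tiny model.
layer = LoRALinear(nn.Linear(64, 64))
print(layer(torch.randn(2, 64)).shape)   # torch.Size([2, 64])
```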

Interestingly, the success of Karpathy’s approach lies in its ability to achieve highly interactive rates even with reasonably sized models, such as a roughly 15-million-parameter model trained on the TinyStories dataset.

Hopefully this will bring back the original OpenAI, which started as a non-profit committed to open research. Karpathy was one of its founding members and has played an active role in contributing to the open-source community.


How to manage real-time data in the digital age

In today’s tech-driven world, data is like gold. It’s becoming more and more common for companies to use real-time, or live, data to make informed decisions, improve the service they give to customers, and get a leg up on the competition. But handling real-time data can be tricky because there’s so much of it, it’s constantly changing, and it comes in many forms. Here are nine easy ways to manage and use real-time data effectively.

1. Knowing what real-time data is

Real-time data is information that's delivered right after it's collected, without any delay. It can be used in many ways across many industries. For example, banks use real-time data to keep track of changes in the stock market, and hospitals use it to monitor patients' health in real time. The first step in handling real-time data is understanding what kind of real-time data your business needs and how to use it to make decisions.

2. Making sure your data is good quality

Good quality data is crucial for making good decisions. Bad quality data can lead to incorrect analysis, bad strategies, and even business failure. To make sure your data is good quality, clean it regularly to get rid of errors, and check the accuracy and quality of data when it’s entered. Checking for consistency can also help keep the data accurate across different platforms.
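
To make these checks concrete, here is a minimal sketch (the column names and validity rules are invented for the example) that uses pandas to deduplicate records, require a key field, and enforce a simple rule on entry:

```python
import pandas as pd

records = pd.DataFrame({
    "order_id": [1, 1, 2, 3, None],
    "amount":   [19.9, 19.9, -5.0, 42.0, 10.0],
})

clean = (
    records
    .drop_duplicates()              # remove exact duplicate rows
    .dropna(subset=["order_id"])    # require the key field to be present
)
clean = clean[clean["amount"] >= 0]  # simple validity rule at entry

print(clean)
```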

3. Using real-time data streaming

Real-time data streaming is a key part of data management. It means processing data quickly so you can get insights right away. Tools like Apache Kafka and Amazon Kinesis can handle large amounts of real-time data effectively. The biggest benefit of real-time data streaming is that it lets businesses respond to changes in real time, which can lead to better decision-making.
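
As a concrete illustration, here is a minimal sketch of publishing and consuming a stream with the kafka-python client. The broker address and topic name are placeholders, and it assumes a Kafka broker is already running locally; Amazon Kinesis offers an equivalent managed service with its own SDK.

```python
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 1, "amount": 19.9})  # publish one event
producer.flush()

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:            # react to each event as it arrives
    print(message.value)
    break
```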

4. Taking advantage of real-time data analysis

Real-time data analysis is a powerful tool that businesses can use to guide their decision-making. It lets organizations collect, organize, analyze, and present data in real time. This allows them to make fast decisions based on the latest information. Real-time data analysis can help businesses spot trends, patterns, and insights they might otherwise miss, helping them react to market changes and make good growth strategies.
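
One simple form of real-time analysis is a rolling-window metric that updates as each event arrives. The sketch below is illustrative only: it keeps a one-minute window of values and recomputes an average on every new data point.

```python
import time
from collections import deque

WINDOW_SECONDS = 60
window = deque()                    # (timestamp, value) pairs inside the window

def ingest(value, now=None):
    """Add one event and return the rolling average over the last minute."""
    now = time.time() if now is None else now
    window.append((now, value))
    while window and window[0][0] < now - WINDOW_SECONDS:
        window.popleft()            # evict events older than the window
    return sum(v for _, v in window) / len(window)

print(ingest(10.0), ingest(20.0))   # 10.0 15.0
```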

5. Keeping your data safe

Data security is incredibly important when managing real-time data. As businesses use real-time data more and more, they need to make sure their data is secure and private. If they don’t, they risk exposing sensitive information, which can lead to big problems like financial loss and damage to their reputation. To keep data safe, businesses should use strong authentication and encryption, and choose cloud services with good security features. They should also monitor the activities of users who have access to sensitive information, and regularly back up their data to keep it up to date.
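
As one small example of protecting data at rest, the sketch below uses the `cryptography` package's Fernet recipe for symmetric encryption. The record is invented, and in practice the key would be loaded from a secrets manager rather than generated inside the script.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()         # in practice, load this from a secrets manager
cipher = Fernet(key)

token = cipher.encrypt(b'{"patient_id": 42, "heart_rate": 71}')
print(cipher.decrypt(token))        # only holders of the key can read the record
```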

6. Combining data silos

Data silos happen when data from different sources isn’t combined, which can lead to inefficiency and difficulty accessing up-to-date information. Businesses need a good plan to combine all their siloed data, which they can do using APIs, ETL tools, and other data integration technologies. This will allow them to access all relevant information in real time, and make informed decisions quickly.
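
A toy illustration of breaking down silos: the sketch below merges two mocked sources (say, a CRM export and web-analytics visit counts) into a single queryable SQLite table. The field names and values are invented for the example.

```python
import sqlite3

# Two mocked silos, keyed by email address.
crm_rows = [("alice@example.com", "Alice"), ("bob@example.com", "Bob")]
web_rows = [("alice@example.com", 5), ("bob@example.com", 2)]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (email TEXT PRIMARY KEY, name TEXT, visits INTEGER)")
con.executemany("INSERT INTO customers (email, name) VALUES (?, ?)", crm_rows)
con.executemany("UPDATE customers SET visits = ? WHERE email = ?",
                [(visits, email) for email, visits in web_rows])

print(con.execute("SELECT email, name, visits FROM customers").fetchall())
```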

7. Using cloud-based solutions

Cloud-based platforms offer many benefits for managing real-time data. They make it easy for businesses to collect, analyze, and store large amounts of data quickly and securely. Plus, they make it easier to work with employees, partners, and customers in real time. Cloud-based solutions can be scaled up quickly and easily, helping businesses react to changes in real time and stay competitive.

8. Automating data management

Automating data management processes can improve efficiency. It makes it easier for businesses to collect, store, analyze, and process data quickly and accurately. Automation can also help spot errors and inconsistencies early on, which can save time and prevent bigger problems down the line.

9. Training your team

To manage real-time data effectively, you need a skilled team. Because managing real-time data requires specific skills and knowledge, it’s important to train your team so they have the necessary skills to use the systems and technologies available.

Conclusion

Managing real-time data effectively is vital for businesses that want to stay competitive in today's fast-paced market. By following the steps outlined above, businesses can optimize their processes and use real-time data more effectively, helping them make quicker, more accurate decisions and get better results.

It's important to remember that managing real-time data requires both the right tools and the right people. With the right strategy, you can better manage operations and use real-time data to drive growth. Cool Tech Zone, with its focus on modern technology trends, is a great resource and partner for businesses navigating these challenges in the dynamic world of real-time data.

From automation to optimization: How AI is revolutionizing digital marketing campaigns

Welcome to the exciting world of digital marketing! In this blog, we'll explore a frontier where optimization meets automation and artificial intelligence sits at the center. Manual labor and guesswork no longer have to drive marketing strategy: with AI at their disposal, digital marketers have tools that combine data-driven insight with rapid decision-making to transform their campaigns.

Imagine being an online marketer faced with the near-impossible challenge of reaching the right audiences at the right times with the right messages, all while juggling many other responsibilities. It sounds exhausting, but AI can help: because it processes large volumes of information quickly and seamlessly, it acts like a tireless teammate for generating and testing new approaches.

Let's backtrack a moment and clarify what automation and optimization mean. Automation has long been part of digital marketing strategy, used to streamline repetitive tasks and workflows and keep marketing engines operating at full throttle, whether for social media posts, emails or data-driven ads. But its power only goes so far; this is where AI steps in to bring greater efficiency to marketing processes.

AI offers sophistication that goes beyond automation. Leveraging machine learning algorithms, AI can quickly analyze large databases to spot patterns and make decisions that improve a campaign's chances of success. It's like having an advanced crystal ball that anticipates customer behavior, identifies the most effective strategies, and keeps optimizing them for the best outcomes. So get ready to see how AI is taking marketing from automation to optimization.

As we move through this blog, we'll look at the ways AI drives optimization in digital marketing, from AI-powered campaign automation and personalization at scale to video compression at scale. Let's dig into how it works and what it makes possible.

Understanding automation in digital marketing

Automation has long been an integral component of digital marketing. By streamlining processes and increasing efficiency, it lets marketers focus on strategic goals more easily than before. Digital marketing uses technologies and programs to automate repetitive processes such as email sending and social media posting, relieving marketers of repetitive workloads, saving resources, and freeing time for high-level strategy and creative thinking.

One of the primary advantages of automating digital marketing is its ability to save time and enhance effectiveness. Imagine sending personalized emails to thousands of customers or updating social media at specific hours by hand; such tasks would take forever. Automated tools allow marketers to schedule emails, posts, and ads ahead of time without constant hand-holding, leaving more time for studying data, revising strategies, or crafting engaging material.

Automation provides marketers with consistency and cohesion across different channels, by helping to design personalized user journeys where promotions and messages can be tailored specifically to an individual’s habits and preferences. This approach creates more effective relationships with clients while increasing brand loyalty; additionally, it enables real-time data tracking allowing marketers to monitor campaign performances for future improvements or make better decisions.

The role of AI in digital marketing

Artificial Intelligence (AI) is revolutionizing digital marketing, opening up many opportunities and advances for the industry. AI goes beyond conventional technology by taking an intelligent and flexible approach rather than performing fixed tasks. Applied to digital marketing, it uses sophisticated algorithms and machine-learning techniques to analyze vast amounts of data, make real-time decisions, segment customers for personalized campaigns, and run predictive analysis for campaign optimization, among many other things.

AI's greatest contribution to digital marketing lies in its capacity to analyze and process massive databases at unrivaled speed. Through AI-powered software and algorithms, marketers can glean invaluable insight from vast amounts of customer behavior, preference, and purchase-history data. That enables precise customer segmentation and targeting, so marketers can create customized experiences that truly resonate with individual customers, leading to greater engagement and more conversions.

AI also gives marketers the power to improve their campaigns quickly by constantly monitoring and analyzing data. AI algorithms can detect patterns and trends fast, helping marketers make informed decisions that improve the efficacy of their campaigns. With AI-powered predictive analytics, they can anticipate customer behavior and adjust strategies accordingly to increase the return on investment (ROI) of their marketing efforts. By harnessing vast quantities of information, they can also tailor messages, targets, and channel selections into highly personalized campaigns that produce better results.

AI-powered campaign automation

AI-powered campaign automation has quickly become one of the game-changers of digital marketing. By harnessing artificial intelligence and advanced algorithms to complete tedious and difficult tasks automatically, from the creation of ads through distribution and targeting, marketers are freed from mundane administrative duties in favor of strategy and creative thinking.

Utilizing AI-powered campaign automation also allows companies to design advertisements tailored to specific audience segments. After processing large volumes of data, AI algorithms identify the appropriate groups and ensure your message reaches exactly the intended recipient at the right time. Targeted ads increase not only efficiency but also return on investment by reducing money spent on irrelevant audiences. Furthermore, because AI automates this process and continuously learns from performance data, it delivers ads more effectively and produces better outcomes over time.

Optimization through AI

Optimization is at the heart of effective marketing, and AI plays a pivotal role in taking data-driven optimization to new heights. Because AI can rapidly analyze and process large volumes of information in real time, marketers can make informed decisions while continuously refining their strategies for maximum performance.

Optimization through AI rests on machine-learning algorithms and predictive analytics. These tools surface important patterns and insights within customer data, analyzing engagement metrics, customer behavior, and conversion rates, and help marketers identify areas for improvement and tune their messaging to what their audience responds to best.

Real-time optimization is another crucial advantage AI brings. While traditional methods require manual analysis and adjustments, AI-powered systems continuously monitor campaign performance and adjust bidding strategies, targeting, and creatives on the fly to keep campaigns performing at their best. This real-time process not only maximizes the effectiveness of ads but also helps marketers stay ahead in an ever-evolving digital ecosystem.

AI-powered video compression in digital marketing

Video content has become a dominant force in digital marketing, capturing the attention and engagement of audiences. The challenge lies in delivering high-quality videos while keeping loading times short and the user experience smooth. This is where video compression comes into play. By leveraging advanced algorithms, video compression tools can optimize video files, reducing their size while maintaining visual quality and enabling marketers to deliver seamless, engaging video experiences.

AI-powered video compression algorithms analyze video content to identify where compression can be applied without a visible loss of quality, eliminating redundant content and optimizing encoding so files shrink for streaming and faster loading. That means marketers can offer high-quality video even over slower connections, improving the viewer's experience while minimizing quality loss.
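
The tools described here are AI-driven, but as a concrete (non-AI) point of reference for the size-versus-quality trade-off, the sketch below re-encodes a video at a higher compression level with ffmpeg. The file names are placeholders, and ffmpeg must be installed on the machine.

```python
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",     # placeholder source file
    "-c:v", "libx264",               # H.264 encoder
    "-crf", "28",                    # higher CRF = stronger compression, smaller file
    "-preset", "slow",               # spend more CPU time for better quality per byte
    "output_compressed.mp4",
], check=True)
```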

AI-powered compression of video offers marketers numerous other advantages beyond improved load times, including improving video streaming quality across devices and platforms. Thanks to AI technology, video content can be adjusted and compressed accordingly depending on each viewer’s capabilities and network conditions for smooth video playback no matter their Internet connection or device of choice. Adaptive video compression makes sure videos look their best no matter who is watching!

Personalization and customer experience

Personalization has long been at the core of effective digital marketing, and AI is at the forefront of delivering personalized experiences across large customer data sets. AI lets marketers recognize individual preferences, behaviors, and tastes, and build custom experiences tailored precisely to their target customers.

AI allows marketers to deliver tailored messages and content based on customer traits and behaviors. Marketers can analyze customer data such as shopping habits, browsing history, and demographics to produce highly specific insights, from personalized product suggestions and email messages to targeted promotions. AI-powered personalization enhances the user experience by delivering the right communication at the right time, with messaging that resonates with each person individually.

AI-powered personalization goes beyond content; it also extends to experiences. Marketers using AI can leverage customer information for interactive website experiences, personalized landing pages, and customized product offerings; all designed to increase engagement while building trust – leading to higher conversion rates overall.

Overcoming challenges and ethical considerations

AI offers many advantages to digital marketers, but it also presents ethical concerns they must take seriously. One major concern is potential bias in AI algorithms: data used during training can reinforce and magnify existing biases in advertising campaigns. Marketers must ensure AI systems are trained on diverse and representative data to prevent unintentional bias from creeping into content production, targeting, or decision-making.

Another challenge is the balance between human involvement and automation. While AI-powered automation streamlines processes and increases effectiveness, digital marketing often needs personalized interaction and storytelling with a human element to strike an emotional chord with viewers. Finding an equilibrium between AI automation and human involvement keeps marketing authentic and keeps customers engaged.

Because AI runs on large quantities of data, security and privacy issues are central to its use in digital marketing. Marketers should be transparent about their data collection practices, seek consent from individuals when collecting information, keep that data safe from unwarranted disclosure, and respect individuals' privacy rights. Complying with regulations on all of these fronts is essential to building customer confidence.

Future trends and conclusion

AI will continue to reshape digital marketing in new and unexpected ways as time goes on, and several emerging trends could significantly change how marketers leverage it. One is the expansion of voice search through voice-activated devices such as the Siri and Alexa virtual assistants; marketing professionals will need to adapt website content for voice queries, using AI-powered natural language processing that understands user intent and returns suitable results.

Another emerging trend is the combination of AI with AR/VR technologies. AI could enhance AR/VR experiences by offering real-time data analysis and tailored suggestions in virtual environments, enabling brands to build highly interactive experiences that engage users on an entirely different level, something we anticipate with delight.

As AI continues to advance and mature, marketers have an unprecedented opportunity to use this technology to simplify processes, enhance personalization, and boost campaign results. Staying abreast of developments and trends in AI marketing lets marketers stay ahead of the game and keep achieving successful outcomes in an ever-evolving technological landscape.

Start exploring AI applications and platforms that automate repetitive tasks, offer crucial data-driven insights, and enable personalized customer experiences. Consider voice search optimization along with AR/VR technologies for immersive brand experiences; this way you'll leverage AI's full power for digital marketing efforts that yield positive results in the years ahead.

The AI content + data mandate and personal branding

Fair Data Forecast Interview with Andrea Volpini, CEO of WordLift

Andrea Volpini believes every user who wants to build a personal brand online has to proactively curate their online presence first. He sees structured data (semantic entity and attribute metadata such as Schema.org) as key to building a cohesive, disambiguated personal presence online.
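
As a small illustration of the kind of Schema.org markup Volpini is describing, here is a minimal sketch that builds a Person entity with sameAs links for disambiguation. The person, URLs, and identifiers are invented for the example; real markup would use your own profiles and, where available, a Wikidata entity.

```python
import json

person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Anthropologist and UX researcher",
    "url": "https://janedoe.example",
    "sameAs": [  # links that tie this page's entity to other known profiles
        "https://www.wikidata.org/wiki/Q00000000",
        "https://www.linkedin.com/in/janedoe-example",
    ],
}

# Embedding this JSON-LD in a page's <head> lets search engines build the entity.
print(f'<script type="application/ld+json">{json.dumps(person, indent=2)}</script>')
```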

Volpini has been working with web technology since its early days in the 1990s. He latched onto Tim Berners-Lee's semantic web vision in the 2000s, then pivoted to linked data from the mid-2000s into the 2010s. Since 2012, he's been focused on knowledge graph curation as a key means of search engine optimization (SEO).

More recently, as CEO of WordLift, he’s helped customers add Schema markup to their websites automatically and led them to the beginnings of knowledge-graph based conversational AI. It’s a feedback loop approach, a way to harness the power of dialogue between humans and machines, just as SEO has been a means of getting feedback from search engines.

Unfortunately, as Volpini points out, the attention economy of the 2000s has been our data equivalent of the Middle Ages, because it took so much data private. It's a form of data feudalism. As a result, users have been forced to become proactive about the content they create and to manage that content, and its data, as an asset.

In this Fair Data Forecast interview, Volpini says,

“Any individual that wants to create an impact needs to create content and data at the same time. Because without data, you are going to be controlled by others who will train better models and would absorb whatever ability or expertise you might have.

“But if you start by creating your data, then it’s like you are piling up some value or you’re creating some assets. You can immediately create value with SEO, which is a good starting point.”

Volpini sheds valuable light on the sometimes inscrutable topic of AI-assisted SEO. I hope you enjoy our conversation.

Interview with Andrea Volpini of WordLift

Edited Transcript

AM: Welcome to the Fair Data Forecast, Andrea Volpini. I’m just wondering how you got started with this years ago. I know WordLift has been around for several years. When did it start?

AV: We incorporated the company about five years ago in 2017 and I was coming out of research work financed by the European Commission on semantic web technologies applied to content management systems.

I’ve been working on the internet since the early days, the mid ‘90s. I specialized in web publishing platforms. After the Internet service provider phase, it was primarily about connecting people.

And then I began working on creating a platform for publishing content on the web. And then, as we started to grow with the complexity of the data that we were managing, and the web started to evolve and become more and more a place where people could find different types of services with rich data, I started to look for a solution for organizing this vast amount of content.

One of my previous companies worked on the content management system for the Italian Parliament. And so as a parliament you deal with a lot of proceedings and laws and different steps of the legislative process.

And so as we were creating this content management system, we found a deep need for a standard that could allow us to add metadata to these vast amounts of content that made the website so important in these years.

I mean, we’re talking about the 2000s, so very early phases of the digital ecosystem for public administration. And so all of a sudden I started to look at the founder of the World Wide Web, Tim Berners-Lee, as a source of inspiration, as he was asking people to converge on this vision of the semantic web.

And so the first research programs were really in 2009, 2010 I think. At that time I was working with some university here in Italy and tried to find different models for letting people organize ontologies or specific knowledge domains and publish content along with data on the web.

And so that’s how I got started.

AM: A lot of our readership will be familiar with “structured data” that’s in relational databases, but not so familiar with the kind of structured information that you’re well versed in.

Can you paint a picture of a bit of the metadata landscape that you’re a part of and how that fits in with the more traditionally structured tabular data?

AV: Let me continue a bit with the story of how we got into SEO and into creating a platform for building knowledge graphs.

We were dealing with the content management system with all different types of databases and people were kind of reaching out and trying to go with something that would be published on the Web. Because of course, it became immediately clear that if you represented a group inside a large organization, you needed to be facing the public on the web.

And this was in 2010 and 2011. But in order for tabular data to be represented as web pages, we needed to create some level of mappings between the structured content, and whatever was available in the databases.

And so we started to deal with the problem of mapping data in and out and also structuring web pages that could represent this data. And then we realized that the Web could actually become a large database: rather than being made of web pages, it could be made of tabular data that could be connected and made accessible and computable by using some standardized form of metadata.

So linked data was born. Now linked data is a metadata standard that allows us on the open web to describe the same data structure that we might have inside our own databases. But we make it accessible in a web format that enables the open data movement.

In these early days, we moved from the semantic web into the linked data standard with the idea of creating a metadata layer that could be interoperable on the Web. Because of course (content) metadata had existed starting from the first XML files and then of course within the content ecosystem, standards like Darwin Information Typing Architecture (DITA) were providing a solution for people that wanted to label the fields inside their systems.

But linked data was making this possible at web scale and brought forward the idea of creating these vocabularies that would describe the data in such a way that others could understand it and compute.

And the vision of the semantic web was also introducing the concept of agents so AI that would reason over this web data. In 2011, with this research project that I was involved with, we started to create a prototype for WordPress to help people publish this linked data.

And of course, it was immediately clear that there was potential within the context of SEO, because at that point Google and Bing and other commercial search engines were also realizing that in order to crawl and make sense of the web, they needed a layer of interoperable metadata, and they could tap into this linked data standard and make use of it in a simplified form. And so we've been on this journey with search engine optimization for these search engines for over ten years now.

AM: How has SEO changed over that time?

AV: The first projects that we did with WordLift were very hard to prove, because we were forcing the client not only to work on metadata that was, you know, accessible to a third party that wouldn't give evidence of how they were going to use this metadata, but also, I mean, it was a pretty massive amount of work to start working on the new emerging standard called Schema.org. The concept that had previously been the "semantic web," used within the context of research groups or university academic work in general, suddenly became a standard of the Web. And so people had to cope with the idea that they had to understand what a linked data vocabulary is, because Schema.org is a linked data vocabulary.

It’s interoperable, it has its own taxonomy, and it’s structured. But there was no proof on how the search engines were going to use this additional metadata. For us, coming from the experience that we had and the vision that Tim Berners Lee gave us, it was clear that it was a turning point in the web.

In the ‘90s, it was important to publish web pages in order to claim your existence. Then in the beginning of 2011, 2012, it was important that you also started to publish some metadata in order to exist.

But there wasn’t any proof of the pudding. In a way, it was very hard to justify a semantic project at that time.

AM: Just to give you an example of just what you’re talking about and see if it resonates, back when I was at PwC, I’d talk to data scientists and I’d say, well, why aren’t you using semantically structured data?

Isn’t that going to help you with your data science goals? And the question back to me was, how is that data different from Wikipedia? Did you ever develop an answer for that kind of question?

AV: I think that at that time the work that was done inside communities like Wikipedia, DBpedia, and then eventually Wikidata, was foundational for the development of the web of data as we know it and use it today.

And so we ended up working on Wikipedia more than we expected to because of these similarities that you described. And in a way, at that time, our claim was to let users build their own Wikipedia using structured data.

It was clear that a formalized structure could help any publisher affirm and create its own authority on the web, much like Wikipedia did. And eventually this connection between the Schema markup and the Wikipedia community kept growing over the years, until Google started building its own knowledge graph in 2012. To create its own knowledge graph, Google started by ingesting Wikipedia, and then eventually started to create facts from data that it could crawl from the web using structured data, and from data that was coming from Freebase, the large knowledge base created to structure content. [Google had acquired Freebase in 2010.]

There's always been a strong connection between Wikipedia and structured data. But we have always positioned structured data, even these days, as a way of creating your own Wikipedia.

Because you might not be eligible to be on Wikipedia, but you are eligible to create your own graph. And then at that point we also evolved and stopped talking about linked data. Instead, we started to talk about knowledge graphs, because in 2012, when Google introduced the Google Knowledge Graph and its famous motto "from strings to things," the knowledge graph became a concept that a lot of people could understand.

AM: Exactly. And there’s been a lot of development over the past decade in the area of knowledge graphs. And of course, people have different things in mind when they use the term knowledge graph. Can you talk about how the WordLift conceptualization of knowledge graph differs from others?

AV: We were the first to look at this linked data stack in the context of search engine optimization. So on the SEO front, at that time there were very few people talking about structured data. Then they had to start thinking about knowledge graphs.

In 2012, when Google created its own knowledge graph, at least in the SEO industry, there was no accepted notion of entities and concepts. And in the ETL sector, there was little attention paid to information extraction and knowledge management.

People started to create knowledge graphs to address use cases that were way more complex than SEO. The world of people building graph databases was primarily focused on finance and a few other sectors where it would be easy to justify an investment in a knowledge graph, whereas we were the first to say, okay, well, we can use this technology that we call a knowledge graph, which is actually the evolution of the linked data stack, to do SEO.

Why so? Because at that point, for me, it became very clear that what we used to call optimization for search engines (SEO) was actually a data curation and publishing activity. So if I had better data and if I was able to publish it and make it accessible to the search engine in the most interoperable form, which seemed to be the linked data standard, then we could create an edge for a client.

But of course, in the early days it was very hard to prove it because it was unclear how the search engine would use it. But as soon as the knowledge graph arrived, then it became evident that Schema.org was delivering an explicit benefit.

You could see the impact on the search engine results page (SERP) because of the rich features that were presented when you were using specific markup. And it was having an implicit impact by helping you support some synthetic queries.

A basic example was: if you asked about the CEO of WordLift, using structured data the search engine became capable of creating a synthetic query for Andrea Volpini and combining the results coming from the query about Volpini with the results coming from "coffee", therefore creating a more accurate representation of results for that query.

And so these are implicit mechanics that made it possible to prove that there was actually a return on investment (ROI). And in the first year, I was very cautious about talking about ROI, because we couldn't produce enough evidence of what this ROI was.

Was any ROI really there? Would it justify the cost of setting up a knowledge graph? We couldn’t answer that question. But right now, ten years later, I can say that if you give me a dollar, I can give you at least three dollars back in terms of additional traffic that I can create.

This is not applicable to each and every website. So it depends on the vertical. It depends on the data that you have. It depends on the data that your competitors have. But my clients get at least a 3X return on investment.

AM: What’s the thinking behind the 3X factor? Can you tell us how you got to that figure and what it means in terms of articulated results?

AV: In the COVID phase, we had to redefine our offering and we started to more aggressively work on the ecommerce vertical.

And there was a reason why ecommerce was booming. It became an opportunity to do SEO in the ecommerce space also because Google wanted to fight against Amazon for getting the attention on all of these transactional queries that before were only the interest of Amazon.

And so Google started to open up the organic results to ecommerce sites by providing free listings. So even though support for ecommerce sites was only just being added to the roadmap, during COVID we had to accelerate that support at a tremendous pace, because Google was going deeper into these queries and there was tremendous demand from people staying at home.

And so at that point, in order to evaluate the impact, it wasn’t just about clicks and impressions as it was before. So I also had the opportunity to look at the purchase order on the pages where we apply our advanced linked data markup.

And so we started to work on A/B testing with the clients' data science teams to evaluate the impact not only on search traffic but also on actual purchases. And then we could see that we could create an impact in terms of additional sales.

AM: Can you talk about the metrics that might be relevant to what you just said from the customer side, or who’s seeing the ROI impact?

AV: Of course, calculating the ROI on an editorial website is a completely different game, because you have to take into account how the website is monetizing traffic. And therefore, in that context, we would feel more confident looking at the increase in organic traffic. But again, we would apply A/B testing and causal impact analysis in order to make sure that we can isolate external factors.

Because the problem is that at that point, we started to sell SEO as a product, which is what we do now. And so in order to sell it as a product, we had to work on the way in which we could measure this impact.

And if it’s an editorial site, we have to look at the clicks and impressions or any web metrics that the client is measuring. In some cases, it can be, I don’t know, the number of tickets that the client is selling with his editorial content.

In some other cases, it can be the number and the quality of the leads that they are acquiring through the organic channel. So, depending on the use case, we have to be very clear on what we can measure.

And, of course, not every site is a good fit. If your monetization strategy is not strong enough, we may not be a good solution. We might be too expensive. On the contrary, if you are, let’s say, generating leads at, I don’t know, 30 euro per lead, then organic can deliver a 3X return.

So depending on the business value that the traffic creates, we are capable of looking at the impact.

AM: We’ve been talking on the enterprise side here for the most part. What about the personal side?

I help run a personal knowledge graph working group. I started it with George Anadiotis, who has published a book with Ivo Velichkov on that topic. Our thinking was, to begin with, that we need to get more people involved in thinking about knowledge graphs and how to contribute to them, regardless of whether they’re actually builders or not.

Ivo was keen on how to do his own structured version of Roam Research-style note taking. Some of that book has to do with that kind of use case.

But I sent you sort of a story about a comic who is just getting started and they want to do their own promotion and have their own web presence. And for those kinds of people who are just trying to bootstrap their own brand, how does this kind of thing fit in with what they’re trying to do?

Is it too much of a stretch? Is there a place where they can land that’s easier for them? What would you say to somebody like that?

AV: We have a lot of cases like that. On the personal branding front, we have created, as a matter of fact, a strategic partnership with Jason Barnard of Kalicube, specifically to address the personal branding area.

And let’s look at a few cases that we’ve worked on in the past. Matt Artz is an anthropologist who does very interesting research on how anthropology can impact user experience studies. And he came to us after listening to a podcast by Jason Barnard to ask, hey, can I build a knowledge graph on my website?

Would this help in creating a knowledge panel and therefore provide better, stronger authority in the academic sector as well as in the business sector? And we’ve been working with Matt since then, and I actually met him for the first time in real life a few months back when I was in New York for the Knowledge Graph Conference.

And I had the pleasure of inviting him. Besides being a brilliant researcher, he has also been following the strategy of building a knowledge graph on his own personal website and promoting his personal podcast on anthropology and UX.

And he saw that, as soon as he got into the Google knowledge graph, he was able to get more visibility and to grow his network. So building a knowledge graph is not per se an enterprise effort, quite the opposite.

I mean, everyone should create a knowledge graph, much like everyone should create a website. And how can this be done? In a way, the practice is similar to the information architecture strategies that you would apply to your own project.

So are you going to create a section about your books? Are you going to create a section about your biography? Are you going to create a section about your other projects? How are you presenting yourself to others in a digital ecosystem?

I mean, in a way, we’re back to the point where Ted Nelson started. How do we connect things? How do we connect concepts? And so we have a lot of these cases where people, individuals, came to us to build a graph.

And then they have seen the benefit of building a knowledge graph because Google has been able to do the disambiguation. If you look at me, for example, I'm called Andrea Volpini, but there is a world-famous tennis player called Andre Volpini.

And then there is an Olympic champion swimmer who is also called Andrey Volpini. And then there is a musician called Andre Volpini. I mean, even with an Italian name like mine, there are at least four notable people with the same name.

So how do we help the search engine understand who’s who and does that create an impact? Well, sure it does, because you can now ask the search engine, how old am I, and what are the organizations that I have contributed to, and what is my mom’s name?

And of course, if you apply this as a creator or as a top manager, you're going to see a tremendous impact. Yeah.

AM: I even remember, back in the day, maybe in the 2010s, just before or after the Knowledge Graph was announced at Google, filling out a form to disambiguate my web persona.

AV: Right. And so they were reaching out to users, individual users, to do that at that point. And now it’s possible for you to just do that sort of thing yourself and do it your own way.

We've also done a lot of work to help, when possible and when it makes sense, to connect your entity with the equivalent entity on Wikidata, or to publish the same data that you create with WordLift on DBpedia.

We are part of the DBpedia Databus, which means that if there are concepts that a client is publishing on its own website that we think could contribute to the broader knowledge, we will publish them into DBpedia, with links back and forth between DBpedia and your personal knowledge graph.
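Concretely, that cross-linking usually comes down to owl:sameAs statements between your entity and its Wikidata or DBpedia counterparts. A small sketch using rdflib, with placeholder URIs rather than real published identifiers:

```python
# Link a personal entity to its (hypothetical) Wikidata and DBpedia counterparts.
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

g = Graph()
me = URIRef("https://example.com/#author")  # the entity published on your own site
g.add((me, OWL.sameAs, URIRef("http://www.wikidata.org/entity/Q00000000")))
g.add((me, OWL.sameAs, URIRef("http://dbpedia.org/resource/Example_Person")))

print(g.serialize(format="turtle"))  # ready to publish alongside your pages
```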

And until now, though, the primary use case was feeding Google and then starting to use this data in the information architecture of your website. Because in our product we started to create widgets to entice people to click from one concept to another, or to display a context card on a specific term, in order to help people understand what we mean when we describe, I don't know, the semantic web or any other concept that we care about.

But right now it's even more interesting to have this data. And it's interesting not just for the large organization, but also for the individual. Because if I have a well-curated knowledge graph about the things that I have been working on, then I can create an assistant that is fed in context with structured data.

So ontology prompting and fine-tuning models with knowledge graphs: it's what we started doing in 2020, more than three years ago now. And we have had tremendous success with structured data.

Finally, this data is becoming way more important than Google itself. And until now, I had to prove the return on investment primarily by looking at metrics from a third party over which, in the end, I have no control.

I can now show you that you can create better content if you have better data.

AM: Let’s bring in the elephant in the room, which is so-called AI, and your product is AI enabled.

And let's think about your own personal information. How do you assert ownership of that information? How do you keep control of it? You talked about how to make it more authoritative, but there really has to be territory that you stake out yourself.

If you’re like the anthropologist you mentioned, you want custody of this sphere of information that you’re creating, and you want other people to have access to it. But you might want to impose certain restrictions on the access as well.

Have you been thinking along those lines? Have people been asking you about that?

AV: Only to some extent. So we know that we can apply a license to your knowledge graph. And this is helpful for understanding what can be done with this data that you’re making available.

But this year we are also starting to invest more in creating triples that are for private use only. Because until now, Google, Bing, and the other search engines were for us the primary data consumers for structured data.

But now I see the advantage of keeping something for yourself only, that might not be made available to others. And the reason is that in one of my latest blog posts I created an agent that represents me as the author of the post and that allows you to chat with the content of the article and with me as the author.

And I fed the system with everything that I’ve written on the blog, my personal entity on the knowledge graph of the blog and that specific article. So there is an agent created with this orchestrator framework.

In that specific case I'm using LangChain, which has an index for the content of the article, an index for the previous content that I've written on the blog, and then a system prompt that taps into who I am, so my entity page content. Therefore I'm creating an agent that acts as Andrea Volpini.

And if you ask, who are you? It will say, I'm Andrea Volpini, I'm the CEO of WordLift. But then you can also ask, okay, Andy, I don't want to read the article. It's too complicated or too long.

Tell me, what is this for, a travel agency or an ecommerce brand? Explain to me in layman's terms, for this specific target, what you've written about here. And then you realize that the generative web is creating a tremendous opportunity, but it's all about the data.
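For readers curious what such a setup looks like in practice, here is a minimal sketch assuming the classic LangChain 0.0.x retrieval API and an OpenAI key in the environment. It folds the two indexes into one vector store for brevity; Volpini's actual orchestration, indexes, and prompts will certainly differ.

```python
# A toy "chat with the author" agent: retrieval over the article and past posts,
# plus a persona prompt derived from the author's entity data.
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate

article_chunks = ["...paragraphs of the current article..."]      # placeholder text
blog_chunks = ["...paragraphs of earlier posts by the author..."]  # placeholder text

store = FAISS.from_texts(article_chunks + blog_chunks, OpenAIEmbeddings())

persona = (
    "You are Andrea Volpini, CEO of WordLift. Answer in the first person, "
    "using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
)
chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    retriever=store.as_retriever(),
    chain_type_kwargs={"prompt": PromptTemplate.from_template(persona)},
)

print(chain.run("Who are you, and what is this article about?"))
```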

And I like a lot this image that Tony Seale shared the other day, that AI is the tip of an iceberg, but the data underneath is what's creating the actual wow effect.

AM: Yeah, exactly. And it seems like trying to get all these different technology tribes to work in the same direction on this stuff is a huge challenge.

I know something about the Internet identity folks from the IIW workshop here in the Valley, and very smart people working on very smart things, but they're not working on what you just talked about.

You know what I'm saying? And it's like, how do we get all these people together to make a bigger impact than we're making? How do we get the tribes to work with each other better?

AV: We went through the middle age on the web, and we haven't clearly realized it, that as Facebook and Google and the other technology stacks emerged, we actually destroyed the ability of the web to become an interconnected ecosystem. And we devalued the power of interoperable standards. Web identity is a beautiful standard, but the problem is that it's an island.

Because at the moment we have kind of lost this connectedness, and the same applies, of course, to our little world of SEO. Why shouldn't Google make its own entities in the Knowledge Graph dereferenceable and accessible using linked data standards, if linked data is what they use in the end to organize and improve their information?

But we went through a middle age, a period of time when corporate interests made it such that there was no interest in sharing data and no interest in sharing standards.

And I do have hope that this will change. We cannot go through a middle age in the era of AI, simply because it would give tremendous power to a few and limit others from prospering on the web.

AM: Which leads us to another initiative that we have percolating in our personal knowledge graph working group. I don't know if you know Gyuri Lajos, but Gyuri is working on what he calls the IndyWeb and the IndyHub.

He is trying to enable an open peer-to-peer environment, based on the InterPlanetary File System (IPFS), that automates a lot of the collaboration environment. Of course, IPFS is not mature yet, and it's not really working as well as the traditional web does.

Any thoughts on so-called decentralized web? Or the kind of approach where you could start with your own environment and have your own access control?

AV: In a granular way. I think that the Solid project by Tim Berners-Lee is something that we are starting to get ready for. It goes in the direction of decentralizing the architecture. Because in the end, as you mentioned, we have to start from the infrastructure before we get up through the different layers to the apps.

If the infrastructure becomes decentralized, then it’s easier to kind of share the value across the different peers. So I can see that the evolution towards a decentralized semantic web is becoming a need.

But of course there are a lot of other forces coming into play, and I think people are still underestimating the value of knowledge in the era of large language models, because large language models have been attracting all the attention these days.

And then we kind of lose our way again; the fog rolls in. Where is the data coming from? Who owns the data? What is the lineage of this data? How do we track the usage of this data? What is fair use? What is not fair use?

And we work with a lot of publishers, and of course we are a company that does AI. So, good or bad, we do have a lot of visibility, but we also hear a lot of concern, especially from creators and publishers.

In previous months, we went through real fear. Because if all of a sudden people can generate your content, because a model has been trained on it without you even knowing that it's your content, then it gets scary, because it's opaque.

And there is also a misalignment between the creator and the user of the data. And so again, we need a decentralized web and we need something like Solid simply because in the context of artificial intelligence, we can’t let a few companies control the Internet.

AM: Yes, we’re talking at a level where the engineers are trying to do work and collaborate and create the kinds of alignment that you’re speaking of.

But let’s just bring this back to the individual user one more time here.

If they're reading and thinking about these things, what would you say to that person? I mean, say they've got a presence, but it's not structured at all, and they don't know if it's fulfilling the goals they've set out for their content.

How would you start if you were that person?

AV: So in the '90s or early 2000s, I would have said to that person that he or she needed a website, and couldn't build a digital presence only through services provided by others.

And right now, what I would advocate and suggest is that any individual who wants to create an impact needs to create content and data at the same time. Because without data, you are going to be controlled by others who will train better models and absorb whatever ability or expertise you might have.

But if you start by creating your data, then it’s like you are piling up some value or you’re creating some assets. You can immediately create value with SEO, which is a good starting point.

I still remain SEO-driven and focused, because I realized that SEO, for me, has been the way I could start to build the semantic web cost-effectively.

And the same applies to an individual. I mean, are you using advanced SEO techniques for creating an impact? Because if you are not, you’re missing out. And if you are, then most probably you are extensively curating data because this is how SEO works in 2023.

AM: It sounds like the so-called metadata that you’re creating is the key to your own personal control over what you’re creating.

AV: Yes, and you can measure the impact even at the individual level, because the features that you will trigger on Google and on Bing AI are so relevant for your personal reputation that you will see an immediate reward for that work of structuring content and metadata at the same time.

So that’s for me, the starting point. Then you will realize that you’re not really working for Google or for Bing or for whatever chatbot you are targeting. You are actually creating the foundation for your personal growth.

Because with that data, then you can train your own system. Then maybe you can work in context and personalize your digital assistant and whatnot.

AM: Yeah, it makes a whole lot of sense. It really does.

I've seen a graphic that Moz published in their tutorial on how to get started with SEO. Schema.org is at the top of the pyramid, and then there are other things that form the foundation of the pyramid.

So you’re creating your content, but the schema is at the top. Or how would you say it?

AV: With traditional SEO, schema is an advanced technique because it's complicated, especially if it's not automated and you have to write it by hand; it's not the starting point.

If your website has indexing problems, there is no advantage in creating schema. So while I could agree at some level that schema is at the top, I could also start with schema at the bottom.

Because when we create an experience online, whether today we can say it’s a chatbot or it’s a website, we start from architecting information, and we start by looking at the personas that will access this information.

When doing so, using a vocabulary such as Schema.org provides you with insights into how you want to organize your content.

So Schema is also helpful for getting people to think in terms of information architecture. And so you can have Schema at the beginning and not just at the end.

AM: You made me think of IMDb, the film and video media database. It has all sorts of information about the different roles performers have played over the years, but also whether they’ve been an actor or a director, etc.

It’s kind of fantastic. But what is missing from IMDb that could help actors build up their presence and their brand? I think IMDb is the de facto standard in the industry for that type of information.

AV: But of course, how accessible is that data? How interoperable is that data? How connected is that data with other data? They could have done a lot more there. If you think about it, Wikidata has had a way larger impact simply because it’s using open standards and it’s accessible.

IMDb is still privately owned and, of course, managed like a private database. There are some projects that make that data accessible through linked data, but they could have done more in connecting that information with other pieces of information, I think.

But that's the principle. I mean, imagine if you could describe your work experience on your site the way IMDb does, using an interoperable standard like Schema. What does that mean?

It means that, at the moment, if you need to create a quick and easy description or biography using GPT, you can feed in the Schema classes or attributes and then ask the model to describe your work experience to a six-year-old, or to another creator, or whoever.

But if you do not have that data organized and structured, every time you will have to start recollecting memories of what you have worked on in the past. For that reason, you will not create a sustainable system for sharing your expertise.
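A hedged sketch of that idea: hold your work experience as schema.org-style data and ask a model to retell it for different audiences. The data is invented, and the call assumes the pre-1.0 openai Python client.

```python
# Turn structured Person data into audience-specific biographies.
import json
import openai  # assumes openai < 1.0 and OPENAI_API_KEY set in the environment

experience = {
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "UX anthropologist",
    "worksFor": {"@type": "Organization", "name": "Example Lab"},
    "knowsAbout": ["anthropology", "user experience research", "podcasting"],
}

def describe(audience: str) -> str:
    prompt = (
        f"Using only this structured data:\n{json.dumps(experience, indent=2)}\n"
        f"Write a two-sentence biography aimed at {audience}."
    )
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

print(describe("a six-year-old"))
print(describe("another creator"))
```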

AM: Yes. I know that Kingsley Idehen of OpenLink, for example, posts a lot on Twitter about how he’s generating this entity schema information. And I’m wondering if you’re just getting started, how do you do something in a more automated fashion to start building the graph?

AV: I think that we’re currently trying to approach two sides of the coin. So on one side, we want to use language models for accelerating the growth and the nurturing of the knowledge graph that we create.

Because creating and maintaining a knowledge graph costs money. By using a language model, we can extract structure from unstructured content much more quickly than with the old NLP techniques that we used.

On the other side, we are using the structured data in the graph to fine tune language models. So private, dedicated language models that are trained with your data and your content. And by combining these two processes, we can, you know, very easily create thousands of FAQ answers or hundreds of thousands of product descriptions.
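The extraction side of that coin could look roughly like the sketch below, which asks a model to lift a schema.org-style product record out of plain text. The product copy, field names, and the pre-1.0 openai client are all assumptions made for illustration, not WordLift's actual pipeline.

```python
# Lift structured product data out of unstructured text with a language model.
import json
import openai  # assumes openai < 1.0 and OPENAI_API_KEY set in the environment

text = "The Aurora 5 trail shoe weighs 280 g, costs 129 EUR and ships in blue or black."

prompt = (
    "Extract a schema.org Product as JSON with the keys name, weight, price, "
    f"priceCurrency and color from this text:\n{text}\nReturn JSON only."
)
resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
product = json.loads(resp["choices"][0]["message"]["content"])
print(product)  # ready to be validated and merged into the knowledge graph
```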

So these are use cases where there is a lot of volume and a lot of need for quality, but at the same time there is no economy in having someone write each of those pieces of content by hand. The point is, you need to have control over your data, because it's a game changer.

Now, you can start to see that it's a game changer in SEO, but then you will realize that it's a game changer in anything you do with content or with search within your strategy today.

AM: Andrea, thanks so much for edifying me and our audience.

AV: Thanks to you, Alan.

Data Modernisation and Monetisation: A Closer Look at Publicis Sapient’s Trailblazing Initiatives and Strategies

Amid rapid technological advancement, the buzzword on everyone's lips is "data modernisation." As companies grapple with the ever-increasing volume, variety, and velocity of data, the need to harness its power and derive meaningful insights has become paramount. Analytics India Magazine spoke to Deepak Kumar, senior director of data engineering at Publicis Sapient, who shares the company's customer-centric data modernisation plans and their relevance in today's industry.

"Data modernisation is crucial for building a customer-centric culture. And it's not just about providing a good service to the customer but the ability to understand customer situations. Data management, essentially, is to give a platform that serves as a 'one-stop-shop' to everyone in the organisation or a stakeholder so they can easily access, manage and analyse the data for crucial insights that drive more data-driven outcomes," he said.

Kumar further emphasised the genAI pivot in today's era. "Organisations need to prioritise and embrace a data-centric approach to move forward with agility. Gone are the days of relying on outdated data. For businesses to be able to leverage genAI, it's crucial to have up-to-date and relevant data. Only when they do this is when they can benefit from advanced analytics and cutting-edge experiences powered by genAI. What organisations truly need to accomplish this, is dynamic and real-time data that can be seamlessly integrated into their existing platforms, catering to both their brand and their customers. This is why data modernisation plays a pivotal role in establishing a customer-centered business. By ensuring that data is constantly in motion, businesses can harness its full potential and stay ahead in today's fast-paced world," he added.

Today, several traditional companies hesitate at the thought of digitising their business. "They don't have the resiliency because of legacy architectures and tools," said Kumar, highlighting the need for digital transformation. From traditional industries to cutting-edge startups, organisations are recognising data as the new currency, and those who can effectively leverage it will gain a competitive edge in the marketplace.

"Businesses are facing an unprecedented level of volatility and disruptions that have occurred over the past few years. Staying competitive and resilient requires companies to embrace digital transformation and modernization. These initiatives not only mitigate risks but also unlock newer opportunities for growth. While there may be some challenges to overcome, such as managing maintenance costs, it's important to view them as stepping stones towards achieving long-term success. With the introduction of stricter data privacy and security regulations, organisations must also ensure they strike a balance between compliance and operational efficiency. Moreover, considering the impact on revenue and industry demand becomes crucial as businesses navigate through these transformative times," he added.

Publicis Sapient helps its clients at every step of digitising their data, but there are several hurdles that need to be overcome in the process. "One of the key challenges we tackle for our clients is bridging the gap between data modernisation and monetisation," said Kumar.

“It’s not uncommon for organisations to struggle to effectively utilise their data, not realising the potential it holds. We believe that data can be leveraged both internally and externally, allowing organisations to generate value by sharing it with accelerators and other partners. Implementing robust data models lets organisations unlock the power of their data and gain greater visibility for various purposes,” he suggested.

Externally, there are massive shifts in terms of regulatory compliance and Publicis Sapient is helping customers with data privacy and data sharing because most of them are not ready or have slow and disjointed machine learning, he added. Most recently, the company was also named the market leader in Data Modernization Services by HFS Research.

Vision and Strategy

Being a renowned leader in digital business transformation software, Publicis Sapient has demonstrated a strong commitment to driving digital innovation for its clients over the years. The vision was inspired by Nigel Vaz, CEO of Publicis Sapient, and his book titled “Digital Business Transformation.” Vaz introduced a unique philosophy known as SPEED, which has since guided the organisation’s approach to transformation.

SPEED is an acronym for Strategy: developing and testing your hypothesis on priority value pools; Product: evolving at pace and speed; Experience: how you enable value for customers; Engineering: delivering on your promise; and Data: validating your hypotheses and uncovering insights for constant iteration.

"Another notable aspect is our internal solution accelerators, or IP. We have developed a comprehensive solution accelerator and toolkit, which we call the PS inner source. This internal resource empowers our teams by expediting development in various data functions including integration, management, governance, and quality. These accelerators and toolkits provide a solid foundation and streamline the process of building necessary data foundations based on specific requirements. They play a crucial role in facilitating efficient and effective development within our organisation," explained Kumar.

Training and Best Practices

The technology company holds many workshops and initiatives to upskill and cross-skill people. It also has a leadership development programme to make IT leaders shine, said Kumar. "A lot of internal initiatives that we have created talk about self-levelling C paths. We encourage them to become certified professionals in technology so we have a pre-approved list of technologies in the data space where people can see what they certainly want to do," he added.

Suggesting the best practices for organisations in the industry, Kumar said, “Customer-centric guide is all about understanding customer expectations, perceptions and what we need to build Customer 360. If we talk about data models, our statement is interface-centric around data monetisation and production isolation, basically how an organisation can have more understanding and control of their data.”

In conclusion, Kumar said that personalisation, hyper-personalisation and security are the best practices to think of when offering customer-centric data models.


Navigating the Learning Curve: AI’s Struggle with Memory Retention

As the boundaries of artificial intelligence (AI) continually expand, researchers grapple with one of the biggest challenges in the field: memory loss. Known as "catastrophic forgetting" in AI terms, this phenomenon severely impedes the progress of machine learning, mimicking the elusive nature of human memories. A team of electrical engineers from The Ohio State University is investigating how continual learning, the ability of a computer to constantly acquire knowledge from a series of tasks, affects the overall performance of AI agents.

Bridging the Gap Between Human and Machine Learning

Ness Shroff, an Ohio Eminent Scholar and Professor of Computer Science and Engineering at The Ohio State University, emphasizes the criticality of overcoming this hurdle. “As automated driving applications or other robotic systems are taught new things, it's important that they don't forget the lessons they've already learned for our safety and theirs,” Shroff said. He continues, “Our research delves into the complexities of continuous learning in these artificial neural networks, and what we found are insights that begin to bridge the gap between how a machine learns and how a human learns.”

Research reveals that, similar to humans, artificial neural networks excel in retaining information when faced with diverse tasks successively rather than tasks with overlapping features. This insight is pivotal in understanding how continual learning can be optimized in machines to closely resemble the cognitive capabilities of humans.

The Role of Task Diversity and Sequence in Machine Learning

The researchers are set to present their findings at the 40th annual International Conference on Machine Learning in Honolulu, Hawaii, a flagship event in the machine learning field. The research brings to light the factors that contribute to the length of time an artificial network retains specific knowledge.

Shroff explains, “To optimize an algorithm's memory, dissimilar tasks should be taught early on in the continual learning process. This method expands the network's capacity for new information and improves its ability to subsequently learn more similar tasks down the line.” Hence, task similarity, positive and negative correlations, and the sequence of learning significantly influence memory retention in machines.
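As a toy illustration of that ordering heuristic (not the authors' actual method), one could greedily schedule each new task to be the one least similar, by cosine similarity between mean feature vectors, to the tasks already learned; the training loop itself is omitted.

```python
# Order tasks so that dissimilar ones are learned first (toy heuristic).
import numpy as np

def dissimilarity_order(task_features: dict) -> list:
    """Greedily pick the task least similar, on average, to those already chosen."""
    means = {name: feats.mean(axis=0) for name, feats in task_features.items()}
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    remaining = list(task_features)
    order = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda n: np.mean([cos(means[n], means[o]) for o in order]))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Four synthetic tasks whose feature distributions drift further apart.
tasks = {f"task_{i}": np.random.randn(100, 16) + i for i in range(4)}
print(dissimilarity_order(tasks))  # train sequentially in this order
```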

The aim of such dynamic, lifelong learning systems is to increase the rate at which machine learning algorithms can be scaled up and to adapt them to handle evolving environments and unforeseen situations. The ultimate goal is to enable these systems to mirror the learning capabilities of humans.

The research conducted by Shroff and his team, including Ohio State postdoctoral researchers Sen Lin and Peizhong Ju and Professor Yingbin Liang, lays the groundwork for intelligent machines that could adapt and learn akin to humans. "Our work heralds a new era of intelligent machines that can learn and adapt like their human counterparts," Shroff says, emphasizing the significant impact of this study on our understanding of AI.

How Many Jobs has AI Actually Gobbled Up?

“AI will not replace you, but someone who uses AI will.”

Like a cautionary sign stuck on hazardous substances, the above line has been floating around for quite some time now. Economists and tech leaders have been ringing the death knell for a while, cautioning about AI, which is seen as a potential trigger for an impending apocalypse in the job market. With predictive stats on how certain jobs would be gone in an 'xyz' timeframe, a reality check would keep anxieties in check. So what's the brouhaha all about? Has AI, or someone using AI, replaced you?

The media: "AI is replacing 300 million jobs"
The jobs it’s replacing: pic.twitter.com/HAh5XjwGan

— The Agent Suite (@TheAgentSuite) July 19, 2023

Executive outplacement and career consulting firm Challenger, Gray & Christmas attributed 4,000 job losses in May to artificial intelligence, the first time the company has cited AI as a cause of job losses.

Even though there have been massive layoffs owing to the recession in the past six months, including at big tech companies such as Microsoft and Meta, none of them have pointed to AI as a cause. However, observing how each of those same companies is adopting generative AI in its workflow, it is not difficult to connect the dots. Having said that, the trends in the job market paint another picture.

Generative AI Fuels Job Market

With massive adoption of generative AI in enterprises, fuelled either by anxiety or by enthusiasm, AI has indeed kick-started a job evolution in the market. As per AIM Research, the generative AI job market has witnessed steady growth from January to June of this year. Generative AI-related job postings in the United States are said to have risen by 20% in May. From 3,000 job openings in April, the count rose to 4,500 in June. The IT sector has seen the highest number of openings for generative AI roles.

The figures may indicate that jobs are not declining, but job roles have been modified to suit the current wave. For instance, the role of a 'generative AI engineer', a role that never existed before, requires skills drawn from several fields. The role therefore encapsulates those of a deep learning engineer, an ML engineer, an NLP engineer, and also a software engineer.

New roles that probably didn't exist earlier are also sprouting in full vigour. The role of prompt engineer, a result of the chatbot revolution, has witnessed an uptick, with companies increasingly seeking such hires. With the role offering salaries higher than those of Python developers, the job market is only looking positive. The massive shift brought by AI has also sparked a debate on how an entire generation will study for jobs that won't exist. It might not be an overstatement to say that certain job roles may become redundant, but that is mostly because the nature of those roles is changing.

Together We Grow

Enterprises are approaching the generative AI rage as a coalition of sorts: not via replacements, but through implementation and by training their employees to tame the system. TCS, which initially partnered with Google Cloud for its generative AI services, recently partnered with Microsoft to train 25,000 engineers on Azure OpenAI.

In addition to implementing and training, enterprises are also building ways to help other companies thrive on generative AI. Tech Mahindra, which partnered with Microsoft to enable generative AI-powered enterprise search, unveiled its Generative AI Studio to help other enterprises kick-start their generative AI efforts. Other IT companies have followed suit.

AI Over Humans?

While most companies have found ways to work around the generative AI job buzz, some have openly embraced AI over human resources. Dukaan, a platform that enables merchants to set up their e-commerce business, recently laid off 90% of its support staff, replacing them with its new AI chatbot. The company claims to have cut costs and reduced customer resolution time since the move. Telecom company British Telecommunications said that over 55,000 jobs will be cut by the end of the decade, of which a fifth will be in customer service, where AI will replace staff.

There are also industries that have no choice but to embrace generative AI, the travel industry being one of them. The industry hit hardest by the pandemic is now slowly recovering, and travel companies have integrated AI chatbots, ChatGPT plugins, and other features, with AI as the saviour.

While some companies and industries are relying on AI, Zerodha, on the other hand, is all out to safeguard its employees from any form of AI job takeover. The company has been clear that it will adopt AI only where it deems necessary, and not at the cost of someone's job. With a few companies allowing AI to replace jobs, and many others creating new roles and embracing AI to empower their employees to use it effectively without threatening their positions, it is fair to say that AI is becoming integral to all jobs. However, whether it will be a deciding factor in safeguarding one's job is not yet conclusive.


Unlocking the Power of Numbers in Health Economics and Outcomes Research

In health economics and outcomes research (HEOR), the availability of data is a critical challenge, given that obtaining appropriate data, particularly for long-term outcomes and cost statistics, can be difficult. Furthermore, the quality and consistency of data from different sources may vary, making it hard to confirm the credibility of results. Complex designs and procedures are frequently used in HEOR studies to answer unique research questions. Choosing the right study design, such as observational studies, randomized controlled trials, or modeling approaches, necessitates significant thought.

The selection of proper statistical methodologies, sample sizes, and endpoints introduces additional obstacles that can affect the validity of the results. Economic modeling is critical in HEOR because it estimates long-term costs, outcomes, and cost-effectiveness. Developing robust economic models, however, necessitates assumptions and simplifications that may introduce uncertainty and bias. Transparency about modeling assumptions, and testing model outputs against real-world data, is critical but difficult. Addressing these quantitative issues in HEOR requires economists, statisticians, epidemiologists, doctors, and other relevant professionals to collaborate. Improving the rigor and trustworthiness of HEOR research also demands continual methodological advances, data standardization efforts, and robust statistical analyses.

Addressing the Challenges Through Statistics

Quantitative challenges in health economics and outcomes research can be effectively addressed through the use of statistics. Statistics can offer important insights into many facets of healthcare, including patient outcomes, treatment efficacy, and cost-effectiveness, through analyzing and interpreting data.

In order to better inform decisions and enhance healthcare delivery, researchers might use statistical approaches to find patterns, trends, and links in massive datasets. Statistics are essential to the advancement of health economics and outcomes research, whether they are used to assess the effects of a new treatment or the efficacy of a healthcare intervention. When it comes to tackling the quantitative issues that are present in health economics and outcomes research (HEOR), statistical methods are absolutely essential.

Researchers are able to conduct complicated data analyses, evaluate the effects of treatments, and make well-informed judgments with the help of these tools. Statistical methods such as regression analysis, survival analysis, propensity score matching, and Bayesian modeling are helpful in determining associations, controlling for confounders, and estimating treatment effects.

In addition, advanced modeling techniques such as cost-effectiveness analysis and decision trees make it easier to conduct economic analyses and to make judgments regarding resource allocation. HEOR studies can improve the accuracy, reliability, and generalizability of their findings by making use of powerful statistical tools, ultimately leading to improvements in healthcare policy and practice.

Below we explore two of the methods which are pivotal in evaluating the impact of healthcare interventions from an economic perspective.

Markov Chains

Markov chains can be an excellent technique when creating cost-effectiveness models. By simulating the changes between various states over time, Markov chains can shed light on how different variables affect the total cost of a system. A Markov chain, for instance, can assist in estimating the long-term cost of treating a particular disease by simulating the transition of patients between various health stages.

In Figure 1, we have a comparison of a disease transition probability diagram with and without a treatment intervention. Initially, we can observe that the probability of transition from stage 1 to stage 2 is 0.3, from stage 2 to stage 3 is 0.4, and so on. However, when treatment is introduced after stage 1, the transition probability from stage 1 to stage 2 drops to 0.1, and if treatment is continued through stage 2 it reduces the transition probability to stage 3 to 0.1 as well, affirming the efficacy of the treatment. Hence, we can conclude that the treatment reduced the probability of progression at each stage to a third or less of its original value, potentially improving the patient's quality-adjusted life years (QALYs) and thereby helping us estimate the reduction in treatment cost.

Figure 1: Markov process based transition diagram

Additionally, the timing of interventions and the choice among treatment options are two more resource-allocation decisions that can be optimized using Markov chains. By giving a more thorough understanding of the elements that affect cost-effectiveness, Markov chains can help increase the accuracy and reliability of cost-effectiveness models, which ultimately results in better decision-making in healthcare and other industries.
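A small cohort-simulation sketch of the model implied by Figure 1: only the 0.3/0.4 and 0.1/0.1 progression probabilities come from the text above, while treating non-progressors as staying in their current stage and stage 3 as absorbing are assumptions added just to complete the transition matrices.

```python
# Compare disease progression with and without treatment using a Markov cohort model.
import numpy as np

# Rows = current stage, columns = next stage (stage 1, 2, 3).
no_treatment = np.array([
    [0.7, 0.3, 0.0],   # stage 1 -> stage 2 with probability 0.3
    [0.0, 0.6, 0.4],   # stage 2 -> stage 3 with probability 0.4
    [0.0, 0.0, 1.0],   # stage 3 assumed absorbing
])
with_treatment = np.array([
    [0.9, 0.1, 0.0],   # treatment cuts stage 1 -> 2 progression to 0.1
    [0.0, 0.9, 0.1],   # and stage 2 -> 3 progression to 0.1
    [0.0, 0.0, 1.0],
])

def cohort_after(transitions, cycles):
    """Distribute a cohort that starts entirely in stage 1 over the given number of cycles."""
    state = np.array([1.0, 0.0, 0.0])
    for _ in range(cycles):
        state = state @ transitions
    return state

for label, matrix in [("no treatment", no_treatment), ("treatment", with_treatment)]:
    share = cohort_after(matrix, cycles=10)[2]
    print(f"{label}: {share:.1%} of the cohort reaches stage 3 after 10 cycles")
```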

Bayesian Inference

Bayesian inference can be helpful when evaluating the value of healthcare interventions from a financial perspective. Bayesian inference allows researchers to more accurately predict outcomes and evaluate the efficacy and cost-effectiveness of possible interventions by factoring in prior knowledge and information. This method can be especially helpful when data is scarce or insufficient since it allows researchers to fill in the blanks with what they already know. Researchers can enhance the precision and reliability of their cost-effectiveness assessments by employing Bayesian inference, which in turn leads to improved healthcare decision-making and better patient outcomes. Typically, Bayes' theorem is presented as below:

P(A | B) = P(B | A) × P(A) / P(B)

Bayesian inference is a statistical method that has been gaining popularity in the healthcare industry for evaluating the effectiveness of interventions. It enables a more precise estimation of the likelihood of success for a given treatment or intervention by taking prior information into account and updating it with fresh evidence.

For example, in a study on the effectiveness of a new drug, Bayesian Inference can take into account not only the raw data but also prior knowledge about the drug's mechanism of action, potential side effects, and interactions with other drugs. This approach can provide more informative and accurate estimates of the drug's efficacy and safety, which can help guide clinical decision-making.
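As a simplified, hedged illustration of that updating, a Beta prior on a drug's response rate (standing in for prior knowledge about the drug) can be combined with trial counts to give a posterior; all the numbers here are invented.

```python
# Beta-binomial update: prior belief about a response rate plus fresh trial evidence.
from scipy import stats

prior_alpha, prior_beta = 4, 6         # prior roughly centred on a 40% response rate
responders, non_responders = 27, 13    # observed outcomes in a (hypothetical) trial

posterior = stats.beta(prior_alpha + responders, prior_beta + non_responders)
print(f"posterior mean response rate: {posterior.mean():.2f}")
print(f"95% credible interval: {posterior.ppf([0.025, 0.975]).round(2)}")
```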

The study of genetic data to find probable illness risk factors is another application of Bayesian inference in healthcare. Bayesian Inference can assist in identifying new targets for intervention and enhancing our comprehension of the underlying mechanisms of disease by combining prior knowledge about the genetic and environmental factors that affect disease risk.

Another example is in the evaluation of healthcare policies and interventions. By incorporating prior data on the effectiveness of similar policies and interventions, policymakers can make more informed decisions about which policies to implement and which to avoid. Overall, Bayesian inference is a powerful tool for evaluating healthcare interventions, allowing for more accurate and informed decision-making.

Additionally, predictive modeling, such as Bayesian linear regression, is another of the various ways Bayesian inference may be used in healthcare. It can help make more accurate predictions about a patient's health outcomes by taking into account their medical history, symptoms, and other risk factors.

Overall, Bayesian inference is an effective technique for assessing healthcare interventions; by providing more precise and detailed predictions about health outcomes, it can support better clinical decisions and better results for patients.
Mayukh Maitra is a Data Scientist at Walmart working in the media mix modeling space, with more than 5 years of industry experience. From building Markov process based outcomes research models for healthcare to performing genetic algorithm based media mix modeling, he has been involved not only in making an impact on people's lives but also in taking businesses to the next level through meaningful insights. Prior to joining Walmart, he worked as a Data Science Manager at GroupM in the ad tech space, as a Senior Associate of Decision Science at Axtria in the domain of health economics and outcomes research, and as a Technology Analyst at ZS Associates. In addition to his professional roles, he has served on the jury and technical committees of multiple peer-reviewed conferences and has judged several tech awards and hackathons.
