“Most video assets are hugely underperforming,” Todd Carter, CTO of Resolute Square, said in our Personal Knowledge Graph working group interview with him. “I know you all are practitioners used to indexable metadata, but that’s not what we have here.”
Resolute Square (RS) is a Public Benefit Corporation that counters anti-democratic propaganda. While the Voice of America (VOA) is a media organization, funded by the US federal government, that has focused on external authoritarian threats since World War II, RS is not a governmental entity. It’s a media company created by The Lincoln Project, a US political action committee (PAC) founded by center-right Republicans and initially focused on defeating then-President Donald Trump’s reelection bid.
RS concerns itself with fighting the information battle inside the US, backing the democratic system and defending it against those who push for authoritarianism and flood the media system with disinformation.
A lofty goal, to be sure.
In this interview, Carter describes the challenge of content owners like RS who are trying to boost traffic to their sites. Television networks have the same problem: Much of the content (and the value evident from viewing it) is buried in cryptic digital files with low visibility and discoverability.
Carter’s work focuses on unpacking, exposing (increasing the discoverable “surface area” of a video segmented into Key Moments) and semantically connecting many Key Moments from each video. He’s been collaborating with CEO Paul Wilton, Product Director Matt Shearer and team from Data Language and using their Datagraphs knowledge graph as a service (KGaaS) platform. The result? Users can browse a much more detailed view of what’s in each video and pick the Key Moments most relevant to them.
Interestingly, the methods Carter describes apply not only to websites but also to TV content providers. Below are screen captures from his demo of Resolute Square, arranged along the interview timeline, to give you a sense of how illuminating the interview and demo were.
4:10 The Resolute Square Website, which looks conventional, but “everything is connected under the hood.”
5:10 A key video moment (3 min. clip) that’s logically connected via content graph to the parent video
6:00 How the moments are exposed in Google search engine results pages (SERPs)
7:25 How the hyperconnected experience looks to the user from a Google search
8:25 Classes and properties of the content, as viewed in the CMS
9:00 Datagraphs-enabled domain model
10:00 How a unitary video and its key moments are connected by theme
10:35 Many more discovery vectors when exploring article graph connections to other entities
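The surfacing of Key Moments in Google results (the 5:10–7:25 items above) depends on structured data describing each clip. As a hedged sketch of what such markup can look like, here is a small Python snippet emitting schema.org VideoObject/Clip JSON-LD; the URLs, names, and offsets are invented for illustration and are not Resolute Square’s actual markup:

```python
import json

def key_moments_jsonld(video_url, name, moments):
    """Build schema.org VideoObject markup whose hasPart Clips carry
    start/end offsets for each Key Moment (all values illustrative)."""
    return {
        "@context": "https://schema.org",
        "@type": "VideoObject",
        "name": name,
        "contentUrl": video_url,
        "hasPart": [
            {
                "@type": "Clip",
                "name": m["name"],
                "startOffset": m["start"],  # seconds into the video
                "endOffset": m["end"],
                "url": f"{video_url}?t={m['start']}",
            }
            for m in moments
        ],
    }

markup = key_moments_jsonld(
    "https://example.com/videos/democracy-panel",  # placeholder URL
    "Panel on disinformation",
    [
        {"name": "Opening argument", "start": 0, "end": 180},
        {"name": "Q&A on media policy", "start": 600, "end": 780},
    ],
)
print(json.dumps(markup, indent=2))
```

Google’s video “key moments” feature reads Clip markup of roughly this shape, which is how a single video becomes many individually discoverable entry points; a knowledge-graph platform can generate it automatically from segment data.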
“When you can correlate user intent and the thing they’re looking for, they get happy.”
Most of us agree that search is broken. The user experience has not changed much over the last two decades. To make matters worse, because of the SEO/ad-driven focus, search results are often preceded by advertising.
Gen Z has realized this and is using TikTok and other platforms as de facto search mechanisms.
Recently, Google announced Google Perspectives as a mechanism to bring human perspectives into search results. Perspectives can originate from sources such as Reddit, Stack Overflow, and YouTube videos.
Here are some comments on the use of the Perspectives feature with regard to AI:
The original motivation for Perspectives is Gen Z (and TikTok), but it also has an impact on AI.
Understanding the value of specific perspectives to the search response will not be easy, and brands will have to increase their social media presence to be visible in this new world.
Perhaps the biggest implication is that, over time, these perspectives could themselves arise from LLMs.
However, perspectives are not conversations and they are not interactive. We are already seeing that the conversational experience is pulling people away from the web; Stack Overflow traffic, for example, has dropped.
In addition to perspectives, search results can be directly improved with generative AI
Let’s take a question like “what’s better for a family with kids under 3 and a dog, Bryce Canyon or Arches?” Normally, you might break this one question down into smaller ones, sort through the vast information available, and start to piece things together yourself. With generative AI, Search can do some of that heavy lifting for you.
However, neither of these is a conversation in the way ChatGPT is. That said, context carries over from question to question, so you can continue your exploration.
Over time, I see both of these features improving as LLMs and generative AI complement search.
By Jess Warrington, General Manager, North America, CloudBlue
They say eCommerce is the new normal, but beyond simple selling, it has ushered in the next evolution of B2B transactions. Digital marketplaces enable tech vendors to broaden their reach and expand their catalog of products and services, giving companies the ability to package multiple types of products from an online network of distributors and resellers.
It’s partly an eCommerce experience, but more broadly, a B2B digital marketplace combines “sales” with opportunities to create new sources of revenue through a network of digital marketplaces on a cloud platform. Here are my tips for growing your business by building a scalable digital marketplace strategy.
It’s more than a storefront
Businesses want one-stop shopping just like regular consumers do, and digital marketplaces can work just as well for tech vendors as they do for retailers. Forrester predicts that B2B eCommerce will reach $1.8 trillion and account for 17% of all B2B sales in the U.S. this year.
The traditional legacy business model focused on the concept of one-and-done: a company sold a product and shipped that product, and the customer journey ended there. With digital marketplaces, tech vendors can generate recurring revenue through subscription or pay-per-use models. This creates more opportunities to cultivate and deepen ongoing customer relationships.
Dell and Lenovo, for example, offer more than just hardware when they sell their laptops. Microsoft 365 or Google Workspace is included, along with other productivity services. These extras are something business customers have come to expect – customers buy a laptop, plus they get a subscription for software-as-a-service (SaaS). Why settle for one solution when they can get a bundle of products and services, such as storage or cybersecurity? Simple selling is not enough for tech vendors.
Staying connected through an ecosystem
When it comes to the digital marketplace, the post-shopping-cart experience, extending through the customer life cycle, is where this ecosystem brings the most value. It’s about Anything-as-a-Service (XaaS) in this digital-first world. Tech vendors are often dependent on the volume of products or software solutions sold, and an online network of digital marketplaces gives them more opportunities to increase sales.
Businesses would rather buy outcomes for $1,000 a month and be able to scale on-demand, than invest in ten different solutions up front for $50,000, not knowing when those products will become outdated. With digital marketplaces, tech vendors are creating a more elastic relationship with how they deliver results to their customers. That, in turn, promotes customer stickiness.
Joining a digital ecosystem enables SaaS solutions developers to offer custom add-ons to refine their products and make them more appealing to channel partners. Tech vendors can build their own multi-channel marketplace structure and manage both their own IP catalog and the add-ons developed by third parties. This makes it easier for tech vendors to complement their products and showcase them where their customers are already buying solutions.
Tech vendors who navigate the digital marketplace landscape best utilize a marketplace platform with the technology to manage and integrate all their account subscriptions.
An ecosystem enabler connects digital marketplaces using APIs:
The accompanying diagram shows a digital ecosystem management and marketplace platform that lets tech vendors build their own ecosystem, layered as follows:
- Vendor ecosystem: hardware/IoT, cybersecurity, IaaS, SaaS, and XaaS offerings
- Provider ecosystem (the ecosystem enabler): subscription, billing, and order management; vendor, catalog, and listing management; multi-tier channel partner management; and the marketplace interfaces
- Customer ecosystems: SMBs, enterprises, channel partners, resellers, and end consumers
It’s too complicated to go it alone in the digital marketplace landscape. Tech vendors can’t just set up an online store and wait for sales to come in. The main public cloud providers, including AWS, Azure and Google Cloud Marketplace, offer off-the-shelf cloud infrastructures that are highly valuable for business customers. Tech vendors can use a platform to automate every aspect of catalog fulfillment and management as they scale into new markets.
New business models
The B2B digital marketplace also opens the door to new opportunities. Consider a new business model referred to as B2B2X. This business model allows one business to use technology to sell its products through another business’ existing relationships with end customers or businesses, and the X factor can be a consumer, another business, or a public agency. Tech vendors can broaden their reach. Companies can go from simple selling to selling through a digital marketplace and selling with B2B2X platform models.
Investing in subscription and pay-per-use models will set tech vendors on a smoother path forward. Automated provisioning and management of subscription-based services enables frictionless communication with ecosystem partners and lets the complexity of billing reconciliation be handled in the background, while providing accurate, real-time insight into customers’ services, projects, and profitability. At the same time, a marketplace platform puts tech vendors where they need to be: connected to an online community of marketplaces filled with their target audience, from vendors and distributors on to resellers.
At the most basic level, a tech vendor that provides a digital ecosystem management platform becomes a hub that creates and manages many different marketplaces, each one with its own back-end process, product catalog, and information. All those marketplaces feed into one platform, and tech vendors only have to deal with one channel management interface. It’s a way to manage and orchestrate their multi-partner ecosystem.
End-to-end automation of every step in the value chain is key. The automated management of subscription models provides frictionless communications with ecosystem partners. It allows businesses to connect and automate workflows across different systems, applications and data sources.
The healthcare industry relies heavily on accurate claims auditing to ensure proper reimbursement and financial stability. Claims auditors must verify the correct paying party, membership eligibility, contractual adherence, and the absence of fraud, waste, and abuse in order to pay healthcare claims accurately, both prepay and postpay. This is a difficult task with many obstacles.
Manual auditing is time-consuming and error-prone, and automated algorithms and AI have transformed claims auditing: AI-assisted medical claims auditing is catching errors and improving financial recovery. We’ll discuss AI’s benefits, risks, and best practices in claims auditing.
The challenges healthcare is facing
The reimbursement and claims processing workstreams are dominated by high-volume, repetitive tasks like collecting and entering patient and provider data. Front- and back-end healthcare staff spend hours inputting data manually, which can lead to errors. We’re only human!
Incorrect billing or patient documentation delays the process. Payers, providers, and patients must communicate to confirm medical claim details. Healthcare workers who must manually complete repetitive, tedious tasks or fix mistakes can’t focus on patient care.
This issue affects payers as well as billers. Error-delayed claims can make healthcare providers wary of certain plans and carriers. Benefits brokers can then offer only a few options and price points, because so few plans are accepted. This leaves employers who want to offer good, affordable health plans with few options.
How AI empowers healthcare
As healthcare providers realize the extent of these issues, they are adopting AI solutions to streamline claims processing and reimbursement. AI automates these critical but repetitive tasks to reduce errors, improve workflows, and allow hospital staff to focus on more complex tasks that require a human touch.
Healthcare is using AI in disparate systems to outsource and automate repetitive, high-volume tasks for reimbursement and claims processing, reducing employee workloads and speeding up the revenue cycle. AI’s accuracy eliminates patient entry and pre-authorization claim errors and the resulting back-and-forth communications.
AI reduces the high costs of insurance claim denials. AI helps providers spot and correct false claims before insurance companies deny them. This streamlines the process and saves hospital staff time from submitting the claim after a denial.
With faster payments and greater accuracy, healthcare providers are more confident in their reimbursement timeline and willing to accept more plans. AI lets healthcare accept more plans, giving benefits brokers more options for their clients.
AI Benefits to Medical Billing Processes
Medical claims processing is crucial, but it can be complicated and prone to errors and fraud. AI has improved healthcare by automating administrative tasks and streamlining insurance reimbursements.
Medical coding AI software can be used for medical claims auditing, coding, and submission. It improves efficiency by increasing provider approval rates and shortening reimbursement times. AI could cut billing error costs by 8%, saving roughly $96 million; if widely adopted, a 25% reduction would save about $300 million.
Improved detection of coding errors and fraudulent activities
AI algorithms can analyze medical codes and documentation to identify discrepancies, such as incorrect coding, unbundling of services, or upcoding. By pinpointing such errors, AI-assisted claims auditing improves accuracy and prevents improper payments.
Enhanced identification of billing discrepancies and improper payments
AI can compare billing data with medical records and insurance policies to identify billing discrepancies, such as duplicate claims or services not covered by insurance. This comprehensive analysis minimizes the risk of improper payments and ensures accurate reimbursement.
Streamlined claims review and prioritization process
AI algorithms can prioritize claims based on their likelihood of containing errors or discrepancies. This prioritization allows auditors to focus their attention on high-risk claims, optimizing the use of resources and expediting the claims review process.
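As an illustration of the three capabilities above (error detection, discrepancy checks, and prioritization), here is a minimal, self-contained Python sketch. The field names and risk rules are invented for illustration; production systems use trained models and far richer features:

```python
from collections import Counter

def audit_claims(claims):
    """Flag exact duplicate claims and rank the rest by a crude risk score."""
    # Duplicate key: same patient, same procedure code, same date of service.
    seen = Counter((c["patient"], c["code"], c["date"]) for c in claims)
    duplicates, unique, emitted = [], [], set()
    for c in claims:
        key = (c["patient"], c["code"], c["date"])
        if seen[key] > 1 and key in emitted:
            duplicates.append(c)  # later copies are flagged as duplicates
        else:
            emitted.add(key)
            unique.append(c)

    # Crude risk score: large billed amounts and missing documentation
    # push a claim toward the front of the review queue.
    def risk(c):
        score = 0.0
        if c["amount"] > 10_000:
            score += 1.0
        if not c.get("documented", True):
            score += 2.0
        return score

    queue = sorted(unique, key=risk, reverse=True)
    return duplicates, queue

claims = [
    {"patient": "P1", "code": "99213", "date": "2023-05-01", "amount": 120, "documented": True},
    {"patient": "P1", "code": "99213", "date": "2023-05-01", "amount": 120, "documented": True},
    {"patient": "P2", "code": "47562", "date": "2023-05-02", "amount": 15_000, "documented": False},
]
dups, queue = audit_claims(claims)
```

The duplicate check and the risk ranking mirror the two halves of the workflow described above: improper payments are screened out first, and auditor attention is then spent on the highest-risk remainder.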
AI Error Detection and Financial Recovery
Finding medical claim mistakes
Errors and discrepancies in medical claims can hurt financial recovery. Common mistakes:
Coding and documentation errors: Incorrect medical codes and documentation can result in underpayment or claim denials. AI algorithms can help auditors fix these errors.
Upcoding and unbundling: Upcoding assigns medical services higher-value codes than warranted; unbundling separates services that should be billed together. AI can detect these billing discrepancies and stop fraud.
Billing errors: Incorrect billing amounts or duplicate claims can delay reimbursements or lead to denials. AI-assisted auditing can quickly find and fix these errors to ensure proper reimbursement.
Future claims processing
In the digital age, the claims process is evolving, and AI is learning to better serve brokers. Hospitals are rethinking the claims process with AI even as they recover from the global pandemic. According to research, 61% of hospital leaders want to implement AI/RPA within two years.
In the future, hospitals will use AI to streamline backend functions to reduce operational costs, administrative spending, and employee distractions. From payers to brokers, the healthcare ecosystem is realizing the need for speed, efficiency, and accuracy.
AI Claims Processing
Medical claim processing is laborious and error-prone. AI technology avoids the costly delays and denials caused by manual data entry. Automating submission, coding, and analysis speeds up the process and improves accuracy, relieving professionals of paperwork overload.
AI-powered claims processing systems automatically extract data from EMRs, insurance forms, and other sources. NLP algorithms can extract and analyze data, eliminating manual data entry. The system can code and verify the claim against insurance policy guidelines, speeding up and improving accuracy.
AI can increase approvals and decrease denials by automating claims processing. The system quickly detects data errors, allowing healthcare providers to fix them before submission. This reduces rejected claims, speeding healthcare provider reimbursement.
Fraud Detection
We previously reported $5.8 billion in healthcare fraud. What if AI reduced fraud by 8%, or even 25%? That’s $464 million to $1.5 billion in savings!
Fraudulent medical claims cause insurance companies to lose money and raise premiums, while healthcare providers and patients are frustrated by long processing times. Successful claim management requires effective fraud detection.
AI can detect insurance fraud quickly. Advanced machine learning algorithms compare claims data to historical databases for anomalies, patterns, and discrepancies. This powerful system helps insurers prevent costly fraud.
Predictive analytics can help AI-powered fraud detection systems spot potential fraud. The system can detect fraud patterns in social media and other data. This allows insurance providers to proactively investigate potential fraudulent claims, reducing fraud.
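To make the anomaly-scanning idea concrete, here is a minimal sketch in plain Python that z-scores a claim amount against a provider’s billing history and flags outliers. The data and the threshold are invented for illustration; real systems combine many such signals with trained models:

```python
import statistics

def flag_anomalies(history, new_claims, threshold=3.0):
    """Flag claims whose amount deviates sharply from a provider's history.

    history: list of past billed amounts for this provider
    new_claims: list of (claim_id, amount) pairs to screen
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    flagged = []
    for claim_id, amount in new_claims:
        z = (amount - mean) / stdev  # standard deviations from the norm
        if abs(z) >= threshold:
            flagged.append(claim_id)
    return flagged

# A provider who normally bills around $150 suddenly submits a $5,000 claim.
history = [140, 150, 155, 160, 145, 150, 148, 152]
flagged = flag_anomalies(history, [("C-101", 150), ("C-102", 5000)])
```

The z-score here stands in for the “anomalies, patterns, and discrepancies” comparison against historical databases described above; flagged claims would go to an investigator rather than being auto-denied.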
AI helps healthcare providers and insurance customers get reimbursed faster and cheaper. These benefits are achieved by identifying fraudulent claims, which streamlines payments for legitimate claims and lowers premiums.
Predictive Analytics
Medical clinics can avoid appeals by using predictive analytics. Manual entry requires additional staff, and US medical coders cost $45,000–$65,000 annually. Which seems better: hiring one to three billers at $4,000 per month each, or licensing AI medical coding software that assists your staff for $99–$499?
AI-powered predictive analytics can help insurers determine claim approval or denial. Machine learning algorithms analyze claims data to predict approval or denial. These systems can also identify approval or denial patterns in historical claims data. PCG Software’s Virtual Examiner does that.
Predictive analytics helps insurance companies process claims and reimburse healthcare providers faster by predicting each claim’s likely outcome. By identifying claims that are likely to be denied, insurance providers can work with healthcare providers to correct errors before submission, reducing denials. Healthcare providers can reduce errors and improve revenue integrity with successful accounts receivable (A/R) recovery solutions and strategies.
AI Enhances Communication
Healthcare and insurance providers can improve communication with AI-powered systems. AI can automate reminders, notifications, and updates, reducing manual communication. These systems can also use NLP algorithms to understand and respond to patient inquiries quickly and accurately.
AI can improve healthcare provider-insurance provider communication, speeding up processing and approvals. AI-powered communication systems improve patient satisfaction and outcomes by providing accurate and timely information.
AI and Administrative Savings
Administrative overhead and manual data entry make medical claims processing expensive. AI can cut these costs by automating tasks, and by reducing denied claims it lowers the administrative costs associated with appealing them.
Healthcare providers can improve patient outcomes by redirecting administrative costs to patient care. Insurance companies can reduce claims processing costs, lowering consumer premiums.
Conclusion
AI could boost medical claims approvals and lower administrative costs. AI-powered systems can automate data extraction and coding, speeding up and improving processing. AI-powered fraud detection systems also reduce fraudulent claims and increase legitimate claim approvals.
AI can improve communication and reduce administrative costs between healthcare and insurance providers, resulting in better patient outcomes and lower consumer premiums. AI in medical claims processing has the potential to transform the healthcare industry, despite challenges.
Enterprises today are growing rapidly and cite IT observability as an essential business imperative for properly monitoring and managing their complex environments. Tune in to the Pursuing Full-Stack IT Observability summit to learn from leading experts how to establish enterprise observability with the right approach and solutions. Learn about the leading platforms and tools to help companies achieve full-stack observability and keep business environments functioning smoothly.
Attackers have many opportunities to strike on-site and cloud-based enterprise applications, starting early in the development process. But many solutions and tools, such as the emerging DevSecOps framework and application security testing tools, are available to better secure applications and ensure security is prioritized within DevOps. Tune in to the Effective Application Security summit to hear leading experts discuss how to secure the applications in your enterprise infrastructure with strategies like DevSecOps and the right combination of tools and testing.
TLADS and the Socratic Method: Bill Schmarzo’s Excellent Adventure
Frequent Data Science Central contributor Bill Schmarzo has long touted the “Think Like a Data Scientist” methodology for business decisions. Bill notes that when leaders (and employees) “TLADS,” it provides a framework for value-based problem-solving and data-driven decision-making.
By incorporating business context, stakeholder alignment and the practical application of data science techniques, TLADS maximizes the value and relevance of the analysis, Bill says. This is of course beneficial in the context of data science projects, but can also be used for data-driven problem solving, decision making and KPI development.
For this week’s article, Bill takes it a step further and imagines his TLADS methodology mashed up with critical thinking methods developed by Socrates. The Socratic method encourages critical thinking by asking questions that challenge assumptions and uncover inconsistencies. As Bill notes in his article, the Socratic method helps strengthen human skills that might wane as we increasingly use AI in our everyday lives.
Bill’s article makes the case for blending the TLADS method with the Socratic method. Integrating the two methods adds a layer of critical thinking, promotes deeper analysis, and enhances the decision-making process, Bill says, and leads to more informed data-driven decisions. Insightful and thought-provoking questions will become increasingly crucial to LLM development as AI technology continues to advance. Perhaps mashing up modern TLADS ideas with ancient approaches is the way to responsible AI development.
The Editors of Data Science Central
Contact The DSC Team if you are interested in contributing.
DSC Featured Articles
AI-Assisted Claims Auditing: Uncovering Errors Leading to Boosted Financial Recovery May 23, 2023 by John Lee
How Tech Vendors Can Embrace the Digital Marketplace Reset – Tips on navigating the digital marketplace-as-a-service landscape May 23, 2023 by Berkeley PR
LLM results in search – Google search perspectives and generative AI in search May 23, 2023 by ajitjaokar
Boosting video “surface area” for discoverability with knowledge graphs May 23, 2023 by Alan Morrison
The Future of Facial Recognition: Promoting Responsible Deployment and Ethical Practices May 23, 2023 by Rayan Potter
Top 4 Benefits of Modern Data Quality May 23, 2023 by Vanitha
The Dean Meets Socrates: Mastering the Art of Questioning May 21, 2023 by Bill Schmarzo
How Blockchain Technology is Transforming the Business May 18, 2023 by Piyush Jain
How To Create Enterprise Data Warehouse Software May 18, 2023 by Yuliya Melnik
Getting Started with Apache Flink: First steps to Stateful Stream Processing May 18, 2023 by Rachel Pedreschi
An Intriguing Job Interview Question for AI/ML Professionals May 17, 2023 by Vincent Granville
DSC Weekly 16 May 2023 – LLM success depends on quality, transparent data May 16, 2023 by Scott Thompson
At last year’s Microsoft Build, the company’s annual conference, Microsoft released Azure Deployment Environments, a service that enables developers to quickly spin up app infrastructure with project-based templates. After a year of working with over 30 organizations from industries like financial services, retail, and automotive, the team has announced several new features and capabilities, presented at this year’s Build by Senior Product Manager Sagar Lankala.
What’s new?
Deploying new environments through the terminal in a code editor, or as part of a GitOps workflow, is already in public preview. Now generally available: developers can view, deploy, and manage their environments from a custom developer portal, which also houses the cloud-based workstations available through Microsoft Dev Box.
They also announced support for Terraform infrastructure-as-code files in Azure Deployment Environments; customers can sign up for early access today. The service already supports Azure Resource Manager (ARM) templates, and adding Terraform support means customers who use Terraform will be able to import their existing templates directly into Azure Deployment Environments. Support for other infrastructure-as-code formats, including Pulumi and Ansible, is on the backlog.
To enable a seamless developer experience across products, Microsoft is also working on an integration between Azure Deployment Environments and the Azure Developer CLI (azd). With this integration, enterprise developers will be able to use azd to provision app infrastructure through Azure Deployment Environments and easily deploy app code onto the provisioned infrastructure.
The post Azure Deployment Environments is now available for all developers appeared first on Analytics India Magazine.
Generative artificial intelligence is at a pivotal moment. Enterprises want to know how to take advantage of mass amounts of data, while keeping their budgets within today’s economic demands. Generative AI chatbots have become relatively easy to deploy, but sometimes return false “hallucinations” or expose private data. The best of both worlds may come from more specialized conversational AI securely trained on an organization’s data.
Dell Technologies World 2023 brought this topic to Las Vegas this week. Throughout the first day of the conference, CEO Michael Dell and fellow executives drilled down into what AI could do for enterprises beyond ChatGPT.
“Enterprises are going to be able to train far simpler AI models on specific, confidential data less expensively and securely, driving breakthroughs in productivity and efficiency,” Michael Dell said.
Dell’s new Project Helix is a wide-reaching service that will assist organizations in running generative AI. Project Helix will be available as a public product for the first time in June 2023.
Jump to:
Offering custom vocabulary for purpose-built use cases
Changing DevOps — one bot at a time
Behind the scenes with NVIDIA hardware
Should your business use custom generative AI?
Offering custom vocabulary for purpose-built use cases
Enterprises are racing to deploy generative AI for domain-specific use cases, said Varun Chhabra, Dell Technologies senior vice president of product marketing, infrastructure solutions group and telecom. Dell’s solution, Project Helix, is a full stack, on-premises offering in which companies train and guide their own proprietary AI.
For example, a company might deploy a large language model to read all of the knowledge articles on its website and answer a user’s questions based on a summary of those articles, said Forrester analyst Rowan Curran.
The AI would “not try to answer the question from knowledge ‘inside’ the model (ChatGPT answers from ‘inside’ the model),” Curran wrote in an email to TechRepublic.
It wouldn’t draw from the entire internet. Instead, the AI would be drawing from the proprietary content in the knowledge articles. This would allow it to more directly address the needs of one specific company and its customers.
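What Curran describes is commonly called retrieval-augmented generation: fetch the relevant proprietary articles first, then constrain the model to answer only from what was fetched. A minimal sketch in Python, where the keyword retriever and the `ask_llm` stub are stand-ins (assumptions for illustration) for a real embedding index and model API:

```python
def retrieve(question, articles, k=2):
    """Rank knowledge articles by crude keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        articles,
        key=lambda a: len(q_words & set(a["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(question, articles):
    """Answer only from retrieved company content, not model-internal knowledge."""
    context = "\n".join(a["text"] for a in retrieve(question, articles))
    prompt = (
        "Answer using ONLY the context below. If the answer is not "
        f"in the context, say so.\n\nContext:\n{context}\n\nQ: {question}"
    )
    return ask_llm(prompt)  # stand-in for a real LLM API call

def ask_llm(prompt):
    # Stub so the sketch runs end to end; a real system calls a model here.
    return f"[model sees {len(prompt)} chars of grounded context]"

articles = [
    {"text": "Resetting your router: hold the reset button for ten seconds."},
    {"text": "Our warranty covers hardware faults for two years."},
]
top = retrieve("how do I reset my router", articles, k=1)
```

The key design point is that the prompt carries the retrieved context, so the model cannot answer from “inside” itself; swapping the keyword overlap for vector similarity changes the retriever, not the pattern.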
“Dell’s strategy here is really a hardware and software and services strategy allowing businesses to build models more effectively,” said Brent Ellis, senior analyst at Forrester. “Providing a streamlined, validated platform for model creation and training will be a growing market in the future as businesses look to create AI models that focus on the specific problems they need to solve.”
However, there are stumbling blocks enterprises run into when trying to shift AI to a company’s specific needs.
“Not surprisingly, there’s a lot of specific needs that are coming up,” Chhabra said at the Dell conference. “Things like the outcomes have to be trusted. It’s very different from a general purpose model that maybe anybody can go and access. There could be all kinds of answers that need to be guard-railed or questions that need to be watched out for.”
Hallucinations and incorrect assertions can be common. For use cases involving proprietary information or anonymized customer behavior, privacy and security are paramount.
Enterprise customers may also choose custom, on-premises AI because of privacy and security concerns, said Kari Ann Briski, vice president of AI software product management at NVIDIA.
In addition, compute cycle and inferencing costs tend to be higher in the cloud.
“Once you have that training model and you’ve customized and conditioned it to your brand voice and your data, running unoptimized inference to save on compute cycles is another area that’s of concern to a lot of customers,” said Briski.
Different enterprises have different needs from generative AI, from those using open-source models to those that can build models from scratch or want to figure out how to run a model in production. People are asking, “What’s the right mix of infrastructure for training versus infrastructure for inference, and how do you optimize that? How do you run it for production?” Briski asked.
Dell characterizes Project Helix as a way to enable safe, secure, personalized generative AI no matter how a potential customer answers those questions.
“As we move forward in this technology, we are seeing more and more work to make the models as small and efficient as possible while still reaching similar levels of performance to larger models, and this is done by directing fine-tuning and distillation towards specific tasks,” said Curran.
SEE: Dell expanded its APEX software-as-a-service family this year.
Changing DevOps — one bot at a time
Where do on-premises AI deployments like this fit within operations? Anywhere from code generation to unit testing, said Ellis, and focused AI models are particularly good at such tasks. Some developers may use AI like TuringBots to do everything from planning to deploying code.
At NVIDIA, development teams have been adopting a term called LLMOps instead of machine learning ops, Briski said.
“You’re not coding to it; you’re asking human questions,” she said.
In turn, reinforcement learning through human feedback from subject matter experts helps the AI understand whether it’s responding to prompts correctly. This is part of how NVIDIA uses their NeMo framework, a tool for building and deploying generative AI.
“The way the developers are now engaging with this model is going to be completely different in terms of how you maintain it and update it,” Briski said.
Behind the scenes with NVIDIA hardware
The hardware behind Project Helix includes H100 Tensor Core GPUs and NVIDIA networking, plus Dell servers. Briski pointed out that the form follows function.
“For every generation of our new hardware architecture, our software has to be ready day one,” she said. “We also think about the most important workloads before we even tape out the chip. … For example, for H100, it’s the Transformer engine. Transformers are a really important workload for ourselves and for the world, so we put the Transformer engine into the H100.”
Dell and NVIDIA together developed the PowerEdge XE9680 and the rest of the PowerEdge family of servers specifically for complex, emerging AI and high-powered computing workloads, and they had to make sure the hardware could perform at scale as well as handle the high-bandwidth processing, Varun said.
NVIDIA has come a long way since the company trained a vision-based AI on the Volta GPU in 2017, Briski pointed out. Now, NVIDIA uses hundreds of nodes and thousands of GPUs to run its data center infrastructure systems.
NVIDIA is also using large language model AI in its hardware design.
“One thing (NVIDIA CEO) Jensen (Huang) challenged NVIDIA to do six or seven years ago when deep learning emerged is every team must adopt deep learning,” Briski said. “He’s doing the exact same thing for large language models. The semiconductor team is using large language models; our marketing team is using large language models; we have the API built for access internally.”
This hooks back to the concept of security and privacy guardrails. An NVIDIA employee can ask the human resources AI if they can get HR benefits to support adopting a child, for example, but not whether other employees have adopted a child.
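A guardrail like that can be thought of as a pre-filter sitting in front of the model: policy questions go through, questions about other individuals are refused. The sketch below is hypothetical, with an invented topic list, function name, and messages, purely to illustrate the allow/refuse pattern; production guardrail systems (NVIDIA ships one as the NeMo Guardrails toolkit) are far more sophisticated.

```python
# Hypothetical guardrail: allow policy questions, refuse questions
# about other individuals. Terms and messages are illustrative only.
BLOCKED_TERMS = ("other employees", "coworker", "colleague")

def hr_guardrail(question: str) -> str:
    """Return a routing decision for a question aimed at an internal HR model."""
    q = question.lower()
    if any(term in q for term in BLOCKED_TERMS):
        return "REFUSED: questions about other individuals are not allowed."
    return "ALLOWED: forwarding to the HR model."

print(hr_guardrail("Can I get benefits to support adopting a child?"))
print(hr_guardrail("Have other employees adopted a child?"))
```

Real guardrails classify intent with a model rather than keyword matching, but the control flow — check the request before the LLM ever sees it — is the same.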
Should your business use custom generative AI?
If your business is considering whether to use generative AI, think about whether it has the need and the capacity to change or optimize that AI at scale. Also consider your security needs. Briski cautioned against public LLMs that are black boxes when it comes to finding out where they get their data.
In particular, it’s important to be able to prove whether the dataset that went into that foundational model can be used commercially.
Along with Dell’s Project Helix, Microsoft’s Copilot projects and IBM’s watsonx tools show the breadth of options available when it comes to purpose-built AI models, Ellis said. Hugging Face, Google, Meta AI and Databricks offer open source LLMs, while Amazon, Anthropic, Cohere and OpenAI provide AI services. Facebook and OpenAI may well offer their own on-premises options one day, and many other vendors are lining up to join this buzzy field.
“General models are exposed to greater datasets and have the capability to make connections that more limited datasets in purpose-built models do not have access to,” Ellis said. “However, as we are seeing in the market, general models can make erroneous predictions and ‘hallucinate.’
“Purpose-built models help limit that hallucination, but even more important is the tuning that happens after a model is created.”
Overall, whether an organization should use a general-purpose model or train its own depends on the purpose it has in mind for the AI.
Disclaimer: Dell paid for my airfare, accommodations and some meals for the Dell Technologies World event held May 22-25 in Las Vegas.
Looking for a chatbot you can use in Slack to answer questions, provide information, and generate content? One effective AI tool to consider is Claude AI.
Developed by artificial intelligence company Anthropic, Claude is accessible as a website by invitation and through an app you can add to Slack. Once you integrate Claude into your Slack workspace, you're able to send it a request or message directly. You can also include it in a conversation thread as if it were a member of your team, allowing other people to see the response and engage with Claude as well.
Claude can help with creative and collaborative writing, summarizing concepts, and more. It will also remember your entire Slack thread and all messages so you can refer to specific comments and references in a conversation.
Here are some tasks that Anthropic suggests you can ask Claude to tackle:
Get a summary with bullet points and prioritized action items of lengthy Slack threads or long websites.
Turn conversations into structured data inputs for CRM entries, engineering tickets, tables, and more.
Share a website with Claude and ask questions about the content.
Brainstorm ideas with a group, with each participant able to mention Claude and further refine your output.
Like most AI tools, Claude has certain limitations. It may not correctly assess its own ability or memory. It may hallucinate or make up information. It can make mistakes with complicated arithmetic and reasoning and even more basic tasks. Also, it doesn't have general internet access, though it can follow links that you share with it.
The Claude Slack app is currently in beta mode and free to use. Now, here's how to use Claude in Slack.
To get started, you'll need either your own workspace in Slack or a workspace that you administer and to which you're able to add apps. You can then add Claude in a couple of different ways.
For more details and an FAQ about the app, check out Anthropic's Claude in Slack page. You can also send feedback and bug reports directly to Anthropic at support@anthropic.com.
Spotify may use AI to make host-read podcast ads that sound like real people
By Sarah Perez
With Spotify’s AI DJ, the company trained an AI on a real person’s voice — that of its head of Cultural Partnerships and podcast host, Xavier “X” Jernigan. Now, the streamer may turn that same technology to advertising, it seems. According to statements made by The Ringer founder Bill Simmons, the streaming service is developing AI technology that will be able to use a podcast host’s voice to make host-read ads — without the host actually having to read and record the ad copy.
Simmons made the statements on a recent episode of “The Bill Simmons Podcast,” saying, “There is going to be a way to use my voice for the ads. You have to obviously give the approval for the voice, but it opens up, from an advertising standpoint, all these different great possibilities for you.”
He said these ads could open up new opportunities for podcasters because they could geo-target ads — like tickets for a local event in the listener’s city — or even create ads in different languages, with the host’s permission.
His comments were first reported by Semafor.
The Ringer was acquired by Spotify in 2020, but it wasn’t clear if Simmons was authorized to speak about the streamer’s plans in this area, as he began by saying, “I don’t think Spotify is going to get mad at me for this…” before sharing the information.
Reached for comment, Spotify wouldn’t directly confirm or deny the feature’s development.
“We’re always working to enhance the Spotify experience and test new offerings that benefit creators, advertisers and users,” a Spotify spokesperson told TechCrunch. “The AI landscape is evolving quickly and Spotify, which has a long history of innovation, is exploring a wide array of applications, including our hugely popular AI DJ feature. There has been a 500 percent increase in the number of daily podcast episodes discussing AI over the past month including the conversation between Derek Thompson and Bill Simmons. Advertising represents an interesting canvas for future exploration, but we don’t have anything to announce at this time.”
The subtext of this comment indicates Simmons’ statements may have been somewhat premature.
That said, Spotify has already hinted that the AI DJ in the app today would not be the only AI voice users would encounter in the future. When Jernigan was recently asked about Spotify’s plans to work with other voice models going forward, he teased, “stay tuned.”
The streamer has also been quietly investing in AI development and research, with a team of a few hundred now working on areas like personalization and machine learning. Plus, the team has been using the OpenAI model and researching the possibilities across Large Language Models, generative voice, and more.
Spotify’s ability to create AI voices specifically leverages IP from Spotify’s 2022 acquisition of Sonatic combined with OpenAI technology. It may opt to use its own in-house AI tech in the future, the company recently told us.
To create AI DJ, Spotify had Jernigan go into a studio to produce high-quality recordings, including ones where he read lines with different cadences and emotions. He kept his natural pauses and breaths in the recordings, and was sure to use language he already says — like “tunes” or “bangers” instead of just “songs.” All of this was then fed into the AI model that creates the AI voice.
The company hasn’t explained the process in more detail or said how long it took to turn Jernigan’s recordings into an AI DJ. But, given its possible interest in turning its podcast hosts into AI voice models, it must be developing a fairly efficient process here — and one that could possibly leverage a podcaster’s existing recordings.
While AI voices aren’t new, the ability to make them sound like real people is a more modern development. A few years ago, Google wowed the world with a human-sounding AI in Duplex that could call restaurants for you to make reservations. But the tech was initially slammed for its lack of disclosure. This month, Apple introduced an accessibility feature, Personal Voice, that is able to mimic the user’s own voice after they first train the model by spending 15 minutes reading randomly chosen prompts, processed locally on their device.
In the domain of Artificial Intelligence (AI) innovation, a notable development has emerged. Meta, formerly known as Facebook, recently introduced an open-source speech recognition AI. This AI tool is remarkable as it significantly advances global communication by its ability to recognize over 4,000 spoken languages.
Open-Source Model: A Catalyst for Global Collaboration
As our world becomes increasingly interconnected due to the rapid pace of globalization, the diversity of languages has persisted as a considerable impediment to seamless communication. Meta's open-source AI holds the potential to revolutionize this dynamic, transforming how we interact on a global scale by democratizing access to information worldwide.
An open-source system such as this allows developers across the globe to build upon the base system, adding new functionalities and improvements. This approach facilitates a shared development platform that promotes collaboration and contributes to an overall advancement in innovation.
An open-source model also fosters a democratized landscape of innovation where tools and technologies are not just confined to a select few corporations. Instead, it allows a broad range of developers, researchers, and organizations to contribute their insights and expertise, spurring the creation of a robust, versatile tool that can serve diverse communities better.
Image: Meta
Promoting Linguistic Diversity and Inclusion
One of the impressive features of Meta's AI system is its comprehensive range of languages. Facilitated by an extensive data set, this AI has been trained on more than 51,000 hours of multilingual and multitask supervised data procured from the web. The AI's capability to learn from this vast pool without requiring language-specific customization or training is a game-changer in bridging communication gaps.
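The article doesn't name the architecture, but Meta has described the underlying models as wav2vec 2.0 networks fine-tuned with a CTC (connectionist temporal classification) head. Recognizers of this kind turn per-frame character predictions into text by collapsing consecutive repeats and dropping a special blank token. Here is a minimal greedy CTC decoder, with a toy vocabulary and made-up frame predictions for illustration:

```python
def ctc_greedy_decode(frame_ids, vocab, blank=0):
    """Collapse repeated frame predictions and strip the CTC blank token,
    as a CTC-based speech recognizer does after its acoustic model runs."""
    out, prev = [], None
    for i in frame_ids:
        if i != prev and i != blank:
            out.append(vocab[i])
        prev = i
    return "".join(out)

# vocab[0] is the CTC blank; frame ids are per-audio-frame argmax predictions
vocab = ["_", "h", "e", "l", "o"]
frames = [1, 1, 0, 2, 2, 3, 0, 3, 4, 4]
print(ctc_greedy_decode(frames, vocab))  # "hello"
```

Because the output alphabet is just a character vocabulary, the same decoding machinery works for any language the model was trained on — part of why one system can scale to thousands of languages.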
While this development represents a significant stride for Meta, it also offers an opportunity to address the digital divide. Often overlooked in digital innovation, underserved languages could potentially benefit from Meta's initiative. It fosters linguistic diversity on the internet, inviting more voices to participate in the global conversation. This new technology serves not just as a tool, but as a platform to unify users around the globe, making the digital world a more inclusive space.
Navigating Ethical Considerations
However, with every technological advancement comes an accompanying set of ethical considerations. The open-source characteristic of the AI raises concerns regarding potential misuse, necessitating guidelines to ensure responsible use. There's a balance that must be struck between fostering innovation and safeguarding against potential misuse.
Furthermore, issues of data privacy and consent are paramount when accumulating linguistic data on such a large scale. The collection and use of data, particularly in an era where privacy concerns are increasingly prevalent, necessitate clear protocols and transparency from Meta.
Meta's open-source speech recognition AI lays the groundwork for a more inclusive digital future. By breaking down language barriers and democratizing access to information, it ushers in a new era of possibilities. Yet, the ethical implications of such innovation cannot be ignored. As we move forward into this brave new world of AI and communication, we must champion innovation while vigilantly considering its implications and potential challenges. After all, the goal is to ensure that such advancements benefit humanity, bridging gaps rather than creating new ones.