Data tribalism and the AI nuance deficit

If I could name one reason why businesses will face at least one more AI winter, it’s the lack of nuance in most business AI discussions. The buzz about large language models (LLMs) has sucked much of the oxygen out of the air for complementary technologies. The truth is that LLMs are no more a panacea than “blockchain” was back in the 2010s.

Why is there a lack of nuance in AI discussions? One tribe (the numerical, probabilistic, neural net tribe behind today’s generative AI) dominates the discussion and wins the vast majority of investment. It dominates because of the utility of the prompt interface and its ability to emulate natural human language. Today’s generative AI delivers plausible results and, at a minimum, gives coders a first draft to work with.

When it comes to AI, most investors are placing one big bet. The bet is that statistical machine learning alone is enough.

Today’s lopsided AI tribalism reflects what’s happening in politics too. Consider the bell curve of current political opinion. The moderate majority in the middle is even more silent than it was in the 1960s. Meanwhile, the extremes in the long tail are far more strident and vocal. They’re the ones getting the lion’s share of attention. As a result, they command a lot more sway per person than the moderate middle does.

Within the context of the extremes dominating the moderate middle, objective journalism has had to take a back seat. In a 31 July MediaBuzz podcast, veteran Fox News media reporter Howard Kurtz observed, “I’m a journalist. I’m not on either side. But people don’t want to accept that anymore.”

The business community does understand that it needs general AI, not just narrow AI. And yet general AI is complex: deep, underexplored and multi-faceted. The universe of the different kinds of logic AI must exploit is broad. There is no way one tribe by itself clears a path to general AI.

Instead, a hybrid approach, one that today is already blending the power of neural nets and the symbolic AI of knowledge graphs, is really the only approach that harnesses the potential of smart, connected data and moves us toward meaning through extensible, machine readable context.

The nature of understanding in effective communications

In July 2023, CEO Chris Brockmann of knowledge graph solutions provider Eccenca told a story in a LinkedIn post about a point of disagreement regarding knowledge-based versus numeric-only AI.

“The other party,” Chris said, “seemed to believe that numeric AI could deduce decisions from data through ‘learning.’ I was trying to explain that good decisions as well as true innovations first and foremost require an #understanding of the ‘why’, e.g., the #context as well as a lot of foundational knowledge.”

Chris shared a video from an evangelist named Courage Edoma that shed light on the topic of understanding. In the video, Edoma made these simple but insightful points, which I’m paraphrasing here:

  1. Seek understanding. Find out what you need to do with what you know.
  2. Understanding is knowing something or someone correctly.
  3. Understanding is the ability to interpret something correctly….
  4. Understanding makes a message profitable.
  5. You can’t profit from a message you don’t understand.
  6. You can’t benefit from information without understanding.
  7. Incorrect messages can be dangerous.
  8. Misunderstanding is behind many of society’s problems.
  9. Understanding leads to smooth relationships.
  10. Understanding gives us an upper hand in business.

Blending the deterministic and the probabilistic in today’s AI

Guesswork has its place, but facts and rules are foundational for shared business understanding. Data scientists focused on machine learning don’t always consider that deterministic, fact- and rule-based computation was the norm until the 1990s, when advances in compute, networking and storage finally allowed machine learning to flourish.

The emergence of effective, large-scale machine learning doesn’t imply that more mature, but still evolving methods of computation become obsolete. “Symbolic AI” is a term that’s often misunderstood because it has been narrowly defined merely in terms of 1980s-era decision support systems. Today’s knowledge graphs are also symbolic, and yet much broader in scope and richer in capabilities.

Effective AI governance hinges on a foundation of logically interrelated facts and accompanying rulesets that humans and machines together can enforce. Both the facts and the rules can live with the instance data in a knowledge graph.
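
As a minimal sketch of that idea (plain Python with hypothetical triples and a toy rule, nothing like a full knowledge graph engine), facts, rules, and instance data can sit side by side and be checked mechanically by humans and machines alike:

```python
# Facts and instance data as subject-predicate-object triples (hypothetical).
triples = {
    ("Order-1001", "placedBy", "Customer-17"),
    ("Order-1001", "totalEUR", 125000),
    ("Customer-17", "creditLimitEUR", 50000),
}

def get(subject, predicate):
    """Return the object of the first matching triple, or None."""
    for s, p, o in triples:
        if s == subject and p == predicate:
            return o
    return None

def order_within_credit_limit(order, customer):
    """Rule: an order's total must not exceed the customer's credit limit."""
    total = get(order, "totalEUR")
    limit = get(customer, "creditLimitEUR")
    return total is not None and limit is not None and total <= limit

print(order_within_credit_limit("Order-1001", "Customer-17"))  # False: rule violated
```

Because the rule reads the same triples the instance data lives in, a violation is detected declaratively rather than buried in application code.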

Ronald Ross, Co-Founder of Business Rule Solutions and author of the book Rules: Shaping Behavior and Knowledge, points out that “Rules are how you create order from disorder. That’s what laws, statutes, regulations, contracts, agreements, etc. are all about. Rules are also how you remember things and avoid unconscious biases and discrimination (perhaps in machine learning as well).”

Many who are developing knowledge graph capabilities already understand how symbiotic knowledge graphs and large language models can be. Mike Tung, CEO of the automated knowledge graph platform Diffbot, notes that “By guiding the generation of language with the ‘rails’ of a Knowledge Graph, we can create hybrid systems that are provably accurate with trusted data provenance, while leveraging the LLM’s strong abilities in transforming text.”
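
The “rails” idea can be sketched, in spirit, as a post-hoc grounding check: accept only those generated claims that match trusted triples. The triples and draft claims below are hypothetical, and a real system would be far richer than this filter.

```python
# Trusted knowledge graph facts (hypothetical).
trusted_triples = {
    ("Paris", "capitalOf", "France"),
    ("Berlin", "capitalOf", "Germany"),
}

def grounded(claims):
    """Split candidate claims into those backed by the graph and those not."""
    backed = [c for c in claims if c in trusted_triples]
    unbacked = [c for c in claims if c not in trusted_triples]
    return backed, unbacked

# Imagine these claims were extracted from an LLM's draft answer.
draft = [("Paris", "capitalOf", "France"), ("Paris", "capitalOf", "Germany")]
backed, unbacked = grounded(draft)
print(len(backed), len(unbacked))  # 1 1
```

The second draft claim is rejected because no trusted triple supports it, which is the provenance guarantee Tung describes.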

I bet you think this article is about ChatGPT

Generative AI has been around for a long time. Some sources say it appeared as early as the 1950s; others point to the first rudimentary chatbots introduced in the 1960s. Whatever the true point of origin, we can all agree that those were small pebbles on the historical timeline compared to the huge mountain range of research papers, applications, news articles, blog articles, and conversations of the past year, particularly with generative AI’s appearance both in computer vision models (deep learning for images and videos, including Stable Diffusion, Midjourney, and DALL·E) and in large language models (deep learning for text and language, including GPT-3, GPT-4, and the preeminent example mentioned in the title of this article).

Generative AI is a field of artificial intelligence (AI) that focuses on the training and deployment of systems capable of generating new and original content, such as creating novel text, images, music, or video from historical training examples of such content types. While this can be applied to structured data (like data tables, time series, and databases), it has proven to be groundbreaking and globally newsworthy when applied to unstructured data (images and text). Unlike traditional AI models that rely on predefined rules and patterns, generative AI models have the ability to produce novel outputs by learning from vast amounts of prior data.
At the core of generative AI are concepts from machine learning (ML) and statistics. (Of course, statistical learning and machine learning are closely related already.)

With regard to the specific aspects of ML that appear in generative AI, a subset of ML called unsupervised learning is used to learn recurring patterns and structures within a given data set. These patterns then become “building blocks with statistical superpowers” (pardon my hyperbole), which can be combined in logically meaningful and statistically likely groupings to generate new content (often, impressively new content) that closely resembles the training data: text or images. This process is unsupervised learning because it is aimed not at classifying, labeling, or reproducing known patterns (supervised learning), but at complex pattern discovery in unstructured data, rather like a general form of independent component analysis (ICA), which is similar to but not the same as principal component analysis (PCA). ICA is used in signal processing (e.g., in blind source separation, or “the cocktail party problem”): it is a computational method for identifying and separating a complex signal into a set of additive, independent subcomponents.
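
To make the unsupervised-learning idea concrete, here is a tiny numpy sketch using PCA (the simpler cousin of ICA mentioned above) to discover a hidden direction in synthetic, unlabeled data; the data and numbers are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Points scattered mostly along the direction (1, 1): a hidden "pattern".
t = rng.normal(size=500)
noise = 0.1 * rng.normal(size=(500, 2))
X = np.column_stack([t, t]) + noise

# PCA via SVD of the centered data: no labels supplied, just structure found.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# The first principal direction should be close to (1, 1)/sqrt(2).
print(np.round(np.abs(Vt[0]), 1))  # approximately [0.7 0.7]
```

No label or target was ever given, yet the decomposition recovers the latent structure, which is the essence of unsupervised pattern discovery.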

With regard to the aspects of statistics that appear in generative AI, we encounter many of the key statistical concepts that underlie Markov models and Bayesian learning (hence the origins of generative AI in the 1950s). Conditional probabilities, which power those methods, go much further back in history, most prominently of course to the Reverend Thomas Bayes (whose theorem was published in 1763). Using conditional probabilities on colossally complex and massively multivariate data, generative AI calculates the most likely combination of those building blocks (the patterns and structures learned by the unsupervised ML) in response to the user’s query (i.e., the user prompt).
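
The conditional-probability machinery can be illustrated with a toy bigram Markov model, a distant ancestor of today’s LLMs; the corpus below is a made-up example, and real models condition on vastly longer contexts.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Estimate P(next word | current word) from bigram counts.
bigrams = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigrams[current][nxt] += 1

def most_likely_next(word):
    """Return the statistically most probable next word."""
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("the"))  # cat
```

“cat” wins because it follows “the” twice in the corpus while “mat” and “fish” each appear once, which is exactly a conditional-probability calculation.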

The “secret sauce” in generative AI’s ability to build novel outputs thus comprises three basic structures: (1) the storehouse of all possible ingredients (i.e., the ML-learned patterns and structures in the training data); (2) the user’s intent (i.e., personalized request, from a vast menu of choices, provided in the user query, which is the prompt that specifies what the user wants); and (3) the recipe (i.e., the statistical model that computes which combination of ingredients, and in which order, will generate the output that is most statistically likely to satisfy the user query). Please excuse my use of a food metaphor here, but I have my reasons (see below).

To add a bit more color here, the “context” of the query is also fundamentally important, but I intended that “personalization” component of generative AI to be represented already in the prompt that specifies the user’s intent. Getting the best (most informative, satisfying, and personalized) response depends strongly on providing good context through good prompt engineering, which is rising as a new “future of work” workplace skill.

At this point (while writing this), I decided to instantiate my food metaphor with ChatGPT. So, I prompted ChatGPT with this query: “Give me a recipe for a pie that uses regional fruits and spices for a person living in Hawaii.” Here is the response: “Kirk Borne asks ChatGPT for a Hawaiian Pie.” [REF 1] I need to wrap up this blog and go make a pie now.

While all this is exciting, enticing, exhilarating, and explosively transformative, we must also be educational. What I mean is this: before business executives and other leaders get a case of FOMO and say, “give me some of that generative AI right now”, for fear of falling behind their competitors and the rest of the market, there needs to be a foundation for any such deployment to be successful and productive within the enterprise. What are some of the key ingredients in that recipe? [REF 2] Here are three:

  • Data Literacy – the people need to understand data and how data provides business insights and value; what are the types of data that exist in the enterprise; where do these data reside; who is using these data; for what business purposes are the data being used; what are the ethical (governance and legal) requirements on the access and use of these data; and ultimately are these data sufficiently ready to be used in training generative AI (large language and/or vision models)?
  • Data Quality – do we really need to say it? Okay, I will say it: GIGO “garbage in; garbage out!” Dirty data is far more dangerous in black-box ML models, particularly those models that consume massive quantities of data (e.g., deep learning, AI, and generative AI). Model explainability means nothing if the data is dirty, and model trustworthiness is lost.
  • Data/ML Engineering Infrastructure – there is a huge difference between an exploratory ML model running on a data scientist’s laptop versus a deployed, validated, governed, and enterprise-wide model running across the whole business, on which the business is placing a big bet and a ton of dependence. The infrastructure must be AI-ready, and that includes the network, storage, and computational infrastructure. Without this resilient foundation, the ML model running on the CEO’s laptop at a board meeting is probably a better choice than the generative AI “demo demons” showing up at the worst possible time.
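
On the data-quality point in particular, even a crude automated screen for garbage, run before any training, pays for itself. A minimal illustrative sketch (the records and thresholds are hypothetical):

```python
# Toy pre-training data-quality screen: flag records with missing or
# implausible fields before they ever reach a model.
records = [
    {"age": 34, "income": 72000},
    {"age": None, "income": 55000},   # missing value
    {"age": -5, "income": 61000},     # implausible value
]

def dirty(rec):
    """Flag a record whose fields are missing or out of a plausible range."""
    return rec["age"] is None or rec["income"] is None or rec["age"] < 0

dirty_fraction = sum(dirty(r) for r in records) / len(records)
print(round(dirty_fraction, 2))  # 0.67
```

A batch with two-thirds dirty records should never feed a black-box model; catching that upstream is far cheaper than explaining a poisoned one downstream.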

So, do you now think that this article was about ChatGPT? I guess it really was. Maybe. [REF 3]

REFERENCES:

  1. “Kirk Borne asks ChatGPT for a Hawaiian Pie” at https://bit.ly/3pvAzMF
  2. Source for meme: https://bit.ly/44lgMhI
  3. The inspiration for the article’s title: https://www.youtube.com/watch?v=mQZmCJUSC6g

DSC Weekly 1 August 2023

Announcements

  • With more data at your disposal than ever, data management and analytics have never been more critical to defining long-term success. Join the Managing Hybrid and Multi-Cloud Environments summit to explore how AI and ML are shaping the future of data analytics. You’ll discover strategies to implement deep learning, neural networks, RPA, NLP and more while harnessing dashboards and visualizations to give teams access to valuable, easy-to-digest, real-time data insights.
  • Governance, Risk and Compliance (GRC) programs empower organizations of all industries and sizes to better manage crucial activities within the company – boosting the effectiveness of people, business processes, technology, and other vital business elements. At the upcoming Building Resilience Through GRC Strategies summit, gain valuable insights from experts and industry leaders regarding risk mitigation, compliance requirements, best practices and pitfalls of GRC programs, and more. Register for free and gain access to live webinars, fireside chats and keynote presentations from the world’s leading GRC innovators, vendors and evangelists.

Top Stories

  • The Rise of the Dual Data Scientist / Machine Learning Engineer
    August 1, 2023
    by Vincent Granville
    There are thousands of articles explaining the differences between data scientist and machine learning engineer. Data science gets broken down even further, with data analysts contrasted to researchers. Professionals skilled in all these domains are called unicorns and believed not to exist.
  • Human-centered data networking with interpersonal knowledge graphs
    July 31, 2023
    by Alan Morrison
    “If you start by creating your data, then it’s like you are piling up some value or you’re creating some assets,” WordLift CEO Andrea Volpini told me in our recent FAIR Data Forecast interview. Volpini’s an advocate for adding structured data such as Schema.org to your content.
  • Introduction to “AI & Data Literacy: Empowering Citizens of Data Science”
    July 29, 2023
    by Bill Schmarzo
    One of the reasons that I moved back to Iowa last year was that I saw an opportunity to work with local educational institutions to create an AI Institute for organizations in middle America that either get overlooked in the AI conversation or are unsure what AI means to them.

In-Depth

  • DSC Webinar Series: Influence Data-Driven Decisions Based On Your Communication Style
    August 1, 2023
by Ben Cole
    The post DSC Webinar Series: Influence Data-Driven Decisions Based On Your Communication Style appeared first on Data Science Central.
  • Doctor AI: Healing humans and mother earth hand in hand
    July 31, 2023
    by Rayan Potter
Let’s imagine: with algorithms and a nerdy charm that could melt any data center, an ‘AI’ wearing lab coats and stethoscopes patrols hospital hallways, tirelessly monitoring patients. The digital doctor will take the pulse of Mother Earth, reduce waste, and cut energy consumption!
  • Increase efficiency of manufacturing operations with IoT solutions
    July 31, 2023
    by ManojKumar847
    In an age where efficiency is king, manufacturing firms are in a constant race to outshine their competition. Imagine if you could boost productivity, slash downtime, and cut costs all at once. Sounds like a dream, right? The good news is, this isn’t a fantasy. It’s achievable through Internet of Things (IoT) solutions.
  • Understanding license plate recognition with the CCPD computer vision datasets
    July 28, 2023
    by Roger Brown
    In various fields, such as traffic management, law enforcement, and parking management, license plate recognition is a crucial application of computer vision that is used to analyze license plates.
  • DSC Weekly 25 July 2023
    July 25, 2023
    by Scott Thompson
    Read more of the top articles from the Data Science Central community.

Google’s Voice Assistant to Get Generative AI Boost

Google just might have saved its voice assistant. According to a report by Axios, the tech conglomerate is planning to bring generative AI technologies similar to those that power ChatGPT and its own Bard chatbot to its voice assistant and make it ‘supercharged’.

However, Google hasn’t confirmed the development at the time of writing this article.

As part of this move, the company has decided to trim “a small number of roles” from the teams working on its Assistant application. According to the report, the move will involve eliminating dozens of jobs from the NLP team and transferring the responsibility to the generative AI team.

Google hasn’t specified the features it intends to introduce to Assistant, leaving room for some exciting possibilities. “We remain deeply committed to Assistant and we are optimistic about its bright future ahead,” Peeyush Ranjan, the vice president of Google Assistant, and Duke Dukellis, the company’s product director, wrote in an email to the team.

Change of heart

Last year, a report said Google had shifted focus away from its voice assistant to Pixel phones and planned to invest less in developing its Google Assistant voice-assisted search.

According to reports, Amazon also shifted its focus from Alexa. Earlier this year, Amazon employees faced layoffs in the Alexa division as a result of a substantial operating loss in its hardware unit during the previous year. Similarly, Microsoft’s efforts to popularize its Cortana virtual assistant have not yielded the desired success.

However, with the sudden rise of ChatGPT, the desire for a personalised AI assistant is growing day by day: one that can perform everyday tasks like ordering groceries and food, taking notes, writing an email, and other tasks that involve human thinking.

Now that we have ChatGPT and it works well, I find myself yearning for a true AI personal assistant more and more.
The future is going to be so cool. 🤯

— Logan.GPT (@OfficialLoganK) July 28, 2023

A voice assistant with its own intelligence can be a game changer. For example, you can ask for food recipes based on the ingredients you have at your home. As most of us regularly order food, groceries, and essential items online, we tend to have our favourite restaurants and shops. If large language models (LLMs) can remember our preferences, it would save a lot of time since we won’t have to scroll through the menu and manually add items to the cart.

We would just give voice commands and our order would be placed. Similarly, if we want to send an email on the go, we can simply ask the voice assistant to draft it for us. Instead of going through apps and websites manually, these tasks will become automated and much more convenient. Voice assistants will make various tasks seamless and efficient, allowing us to accomplish things with just a few simple spoken instructions.

LLMs are the future of voice assistants

After experiencing ChatGPT and Bard, it is very difficult to go back to voice assistants like Alexa, Siri and Google Assistant. Voice assistants appear insignificant once we get to know the capabilities of LLM chatbots. That’s what happened with Alexa. During its launch in 2015, Alexa experienced a surge of users asking curious and quirky questions, covering everything from the meaning of life to playful wishes. However, as time went by, users lost interest in Alexa. The devices lacked the capability to provide personalised experiences or generate substantial advertising revenue for the company, causing a gradual decrease in user engagement with Alexa over time.

Earlier this year, Insider reported that Amazon is looking to add ChatGPT-like generative AI features to Alexa. Large language models, using lots of web data, can sound more human in conversations with people than voice assistants like Alexa and Google Assistant, which were built mainly for natural-language understanding.

The post Google’s Voice Assistant to Get Generative AI Boost appeared first on Analytics India Magazine.

Google Assistant is about to get supercharged by generative AI, says new report

Voice assistants used to be the pinnacle of artificial intelligence capabilities, with the abilities of Alexa, Siri, and Google Assistant amazing many. Today, however, these traditional voice assistants have come to feel somewhat obsolete.

Google plans on changing that.

Also: 6 helpful ways to use ChatGPT's Custom Instructions

An email obtained by Axios reveals that Google has plans to invigorate its Google Assistant with generative AI. Some members of the team have already begun work on the project.

"We've also seen the profound potential of generative AI to transform people's lives and see a huge opportunity to explore what a supercharged Assistant, powered by the latest LLM technology, would look like," said Google VP Peeyush Ranjan and Director of Product Duke Dukellis in the e-mail.

Google Assistant would employ technology similar to that behind ChatGPT, improving its assisting abilities, its understanding of natural language prompts, and its range of capabilities.

Also: The best smart speakers

The new project will cause a restructuring in the Assistant team, eliminating a "small number" of roles, according to the email. Other changes include a merging of Google's Services and Surfaces team and some leadership changes.

Amazon is also updating its voice assistant. In late April, Amazon CEO Andy Jassy announced plans to revamp its Alexa voice assistant with a more capable large language model.

Artificial Intelligence

Meta could release AI chatbot as soon as September

Meta is preparing a series of AI-powered chatbots capable of displaying various personas, according to a report in the Financial Times. The different personalities reportedly will include a chatbot that imitates Abraham Lincoln and one that impersonates a surfer.

Meta, the parent company behind Facebook, Instagram, and the recently launched Threads platform, reportedly is releasing these chatbots in an attempt to improve user retention. While Threads became the fastest-growing app ever, the social platform is now losing over half its users.

Also: Google Assistant is about to get supercharged by generative AI, says new report

Meta's chatbots would potentially compete with ChatGPT, Bing, and Bard and could be released as soon as September.

Last fall, OpenAI released ChatGPT, a chatbot that triggered a generative AI boom across the tech world. Microsoft and Google released their own AI chatbots over the following months, and many other AI startups have since done the same. Apple is also reported to be working on its own chatbot.

The news of the Meta-backed AI chatbots arrives as Meta strives to maintain user engagement on Threads, a social media app linked to Instagram that launched last month in competition with Twitter.

Meta recently released a commercial version of Llama 2, its openly licensed large language model (LLM), after rumors of an impending widespread launch. Llama 2 is a collection of LLMs in three sizes: 7, 13, and 70 billion parameters; generally, the larger the parameter count, the more powerful the model.

Also: Generative AI and the fourth why: Building trust with your customer

An LLM works a lot like the brain of an AI chatbot. It’s an AI algorithm that processes massive amounts of data through deep learning techniques, and it is typically categorized by size according to its number of parameters. These parameters consist of weights and biases, which define the strength of connections between words and definitions and the rules the model has to follow to use the words in context.
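
To put rough numbers on that, here is a back-of-envelope sketch of how weights and biases add up to a parameter count for a stack of dense layers; the layer sizes are hypothetical and much smaller than any real LLM.

```python
def dense_layer_params(n_in, n_out):
    """Weights (n_in * n_out) plus one bias per output unit."""
    return n_in * n_out + n_out

# A tiny hypothetical 3-layer network: 512 -> 1024 -> 1024 -> 512.
sizes = [512, 1024, 1024, 512]
total = sum(dense_layer_params(a, b) for a, b in zip(sizes, sizes[1:]))
print(total)  # 2099712
```

Even this toy stack has about two million parameters; scaling the widths and depth into the thousands is how models reach billions.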

With the basis already built, Meta seems poised to launch a successful set of AI chatbots in a matter of months.

Artificial Intelligence

YouTube’s new AI feature helps you decide what to watch next

If a new feature from YouTube tests well, AI could help you decide what video to watch next.

In an announcement on its official blog, the company detailed how, starting this week, it is using artificial intelligence to create video summaries that appear on both search and watch pages. The summaries, YouTube says, will make it easier for a user to find out information about a video and decide whether or not it's a good watch.

Also: The best AI chatbots: ChatGPT and other noteworthy alternatives

These summaries, to be available on a "limited number" of English language videos, are only intended to provide brief overviews of a few lines each. The AI-produced summaries will not replace actual descriptions made by the video creators. For now, YouTube hasn't released any details about which videos will be getting these summaries, which users will see them, or what the summaries will look like.

The summary feature is similar to a Chrome extension that debuted in May. This summary option, though, comes from YouTube’s own AI and appears in search alongside a video, not in an additional sidebar.

It arrives on the heels of several other recent changes at the video platform. Last month, YouTube announced an AI voice dubbing feature that enables users to hear videos in their native language. (YouTube's parent company, Google, has also announced several AI initiatives over recent weeks.) YouTube also just raised the price of its Premium service, a move that didn't sit well with many subscribers. And earlier this year, YouTube rolled out several new non-AI-related features like improved video quality, more control over the video queue, and smart downloads for offline viewing.

Also: How to download YouTube videos for free, plus two other ways

The biggest hurdle facing the summary feature will be accuracy. As seen when OpenAI recently pulled its own AI text classifier from the public, AI can be incredibly useful but can also be incredibly wrong at times.

Should the summary feature become permanent, it will be interesting to see how creators react. Any change in how the website displays or recommends videos has content makers scrambling to figure out the algorithm, and trying to make the AI write the optimal description will surely be at the front of many minds.

Kickstarter requires generative AI projects to disclose additional info

by Kyle Wiggers

As generative AI enters the mainstream, crowdfunding platform Kickstarter has struggled to formulate a policy that satisfies parties on all sides of the debate.

Most of the generative AI tools used to create art and text today, including Stable Diffusion and ChatGPT, were trained on publicly available images and text from the web. But in many cases, the artists, photographers and writers whose content was scraped for training haven’t been given credit, compensation or a chance to opt out.

The groups behind these AI tools argue that they’re protected by fair use doctrine — at least in the U.S. But content creators don’t necessarily agree, particularly where AI-generated content — or the AI tools themselves — are being monetized.

In an effort to bring clarity, Kickstarter today announced that projects on its platform using AI tools to generate images, text or other outputs (e.g. music, speech or audio) will be required to disclose “relevant details” on their project pages going forward. These details must include information about how the project owner plans to use the AI content in their work as well as which components of their project will be wholly original and which elements will be created using AI tools.

Kickstarter generative AI policy

Image Credits: Kickstarter

In addition, Kickstarter is mandating that new projects involving the development of AI tech, tools and software detail info about the sources of training data the project owner intends to use. The project owner will have to indicate how sources handle processes around consent and credit, Kickstarter says, and implement their own “safeguards” like opt-out or opt-in mechanisms for content creators.

An increasing number of AI vendors offer opt-out mechanisms, but Kickstarter’s training data disclosure rule could prove to be contentious, despite efforts by the European Union and others to codify such practices into law. OpenAI, among others, has declined to reveal the exact source of its more recent systems’ training data for competitive — and possibly legal liability — reasons.

Kickstarter’s new policy will go into effect on August 29. But the platform doesn’t plan to retroactively enforce it for projects submitted prior to that date, Susannah Page-Katz, Kickstarter’s director of trust and safety, said.

“We want to make sure that any project that’s funded through Kickstarter includes human creative input and properly credits and obtains permission for any artist’s work that it references,” Page-Katz wrote in a blog post shared with TechCrunch. “The policy requires creators to be transparent and specific about how they use AI in their projects because when we’re all on the same page about what a project entails, it builds trust and sets the project up for success.”

To enforce the new policy, project submissions on Kickstarter will have to answer a new set of questions, including several that touch on whether their project uses AI tech to generate artwork and the like or if the project’s primary focus is on developing generative AI tech. They’ll also be asked whether they have consent from the owners of the works used to produce — or train, as the case may be — AI-generated portions of their project.

Kickstarter generative AI policy

Image Credits: Kickstarter

Once AI project creators submit their work, it’ll go through Kickstarter’s standard human moderation process. If it’s accepted, any AI components will be labeled as such in a newly added “Use of AI” section on the project page, Page-Katz says.

“Throughout our conversations with creators and backers, what our community wanted most was transparency,” she added, noting that any use of AI that isn’t disclosed properly during the submission process may result in the project’s suspension. “We’re happy to directly answer this call from our community by adding a section to the project page where backers can learn about a project’s use of AI in the creator’s own words.”

Kickstarter first indicated that it was considering a change in policy around generative AI in December, when it said that it would reevaluate whether media owned or created by others in an algorithm’s training data constituted copying or mimicking an artist’s work.

Since then, the platform’s moved in fits and starts toward a new policy.

Toward the end of last year, Kickstarter banned Unstable Diffusion, a group attempting to fund a generative AI art project without safety filters, letting users generate whatever artwork they pleased, including porn. Kickstarter justified the removal in part by implying that the project exploited particular communities and put people at risk of harm.

More recently, Kickstarter approved, then removed, a project that used AI to plagiarize an original comic book — highlighting the challenges in moderating AI works.

Foxconn Denies INR 16,000 Cr Deal With Indian State, Tamil Nadu

Foxconn’s track record of announcing projects in India that are either incomplete or undisclosed remains undefeated. Just a day after the government of the Indian state of Tamil Nadu announced it had inked an agreement with the Taiwanese manufacturing giant to set up a new electronic components manufacturing facility, Foxconn has denied the claims. The news comes within a few weeks of the company also pulling out of an INR 1,54,000 crore semiconductor venture with Indian conglomerate Vedanta.

A Foxconn subsidiary has stated that the company had not agreed to an investment of INR 16,000 crore with the state, China’s Securities Times reported on Tuesday. “We did not sign any investment agreement,” Foxconn Industrial Internet (FII) was quoted as saying, adding that the company had issued a statement in July refuting similar “rumours”.

According to reports, FII CEO Brand Cheng and other company representatives recently met Tamil Nadu Chief Minister MK Stalin and other government officials to discuss investment in the southern state. Tamil Nadu’s Industries Minister TRB Raja was also present to discuss the iPhone maker’s investment plans. Earlier, the CEO had also met Karnataka CM Siddaramaiah.

Notably, the Foxy business has not just begun in the Indian subcontinent. The Taiwanese contract manufacturer has previously stayed in the headlines of the Indian media for similar stories involving other states, such as Karnataka and Andhra Pradesh.

Earlier this year, Foxconn’s foundry plans in a joint venture with Vedanta, which had been in advanced talks for a site in Maharashtra, suddenly shifted to Gujarat just ahead of the state elections. The company’s efforts to break into chip making in India appear to be faltering as a result of its second-guessing of its own decisions.

Read more: Is Foxconn Conning India?

The post Foxconn Denies INR 16,000 Cr Deal With Indian State, Tamil Nadu appeared first on Analytics India Magazine.

Bain & Company Acquires Max Kelsen’s Consulting to Strengthen AI Capabilities

Bain & Company, one of the leading Boston-based management consulting firms, recently announced a significant acquisition of Max Kelsen’s consulting business, comprising both its consulting and managed services divisions. The acquisition strengthens Bain & Company’s offerings in artificial intelligence (AI) and machine learning (ML), enabling it to provide more advanced ML and AI solutions to clients worldwide.

Max Kelsen, based in Australia and established in 2015, has a track record of collaborating with Australian and global companies to develop and deploy ML solutions. It boasts a team of skilled full-stack ML engineers who build ML systems and AI-powered applications and provide advisory services for a range of clients.

The company’s expertise extends to establishing best-practice machine learning operations (MLOps) capabilities for clients. It has served diverse clients, including Fortune 500 companies, and has partnered with leading cloud providers such as Amazon Web Services and Google Cloud Platform. The healthcare and life sciences sector has been a particular focus of Max Kelsen’s expertise.

The integration of Max Kelsen Consulting into Bain’s Advanced Analytics Group (AAG) will enable the unified team to assist enterprises in creating and implementing high-impact AI and ML use cases.

“This acquisition will strengthen the suite of AI and ML capabilities we offer to our clients regionally and globally,” said Richard Fleming, leader of Bain’s Advanced Analytics Group in Asia Pacific.

Nicholas Therkelsen-Terry, co-founder and CEO of Max Kelsen, expressed excitement about joining Bain. “We are excited to join Bain at a time when businesses are starting to navigate the disruptions brought on by generative AI,” he said.

It is essential to note that Max Kelsen operates additional divisions, such as a products division (SAVI Surgical and PROPeL Health AI) and a research division, which are not part of the Bain acquisition.

The post Bain & Company Acquires Max Kelsen’s Consulting to Strengthen AI Capabilities appeared first on Analytics India Magazine.