Google I/O 2023 is next week; here’s what we’re expecting

A whole bunch of new hardware, coupled with a lot of AI and the best look yet at Android 14

Brian Heater @bheater / 9 hours

Google’s annual developer conference returns to Mountain View’s Shoreline Amphitheater next week, and for the first time in four years, we’ll be returning along with it. The kickoff keynote is always jam-packed with information, debuting all of the different software projects the company has been working on for the past year.

The event, which kicks off May 10 at 10 AM PT, will be a big showcase for everything that’s on the way for Android 14. The company has, arguably, missed a step when it comes to the current generative AI land rush — hell, who could have predicted after all of these years that Bing would finally have a moment?

CEO Sundar Pichai will no doubt be making the case that the company continues to lead the way in artificial intelligence. There’s always been a fair bit of AI at the event, largely focused on practical, real-world applications like mobile imaging and customer service. This year, however, it’s safe to say the company is going to go bonkers with the stuff.

Hardware, meanwhile, is always a bit of a crapshoot at developer conferences. But after an off-year for the industry at large, a deluge of rumors is aligning, pointing to what’s likely to be an unusually consumer electronics-focused keynote. Given that the last bit is my focus at TechCrunch, I’m going to start the list there.

The Pixel 7a is about as sure as bets get. Google has settled into a comfortable release cadence: releasing a flagship in the fall, followed by a budget device in the spring. The former is designed to be an ideal showcase for its latest mobile operating system and first-party silicon, while the latter makes some compromises for price, while maintaining as many of its predecessor’s features as possible.

How to show excitement without shouting? Asking for a friend

Coming to @Flipkart on 11th May. pic.twitter.com/il6GUx3MmR

— Google India (@GoogleIndia) May 2, 2023

It’s a good system that works, and Google’s newly focused mobile hardware team has created some surprisingly good devices at extremely reasonable prices. Never one to be outdone by the deluge of rumors, the company went ahead and announced via Twitter its next device is due out on May 11 — the day after I/O and, perhaps not coincidentally, my birthday. It was Google India that specifically made the announcement — perhaps not surprising, as the company is likely to aggressively target the world’s number one smartphone market with the product. The image points to a very similar design as the 7 — not really a surprise as these things go. Though it does stop short of actually mentioning the name, as it’s done in the past.

Basically expect the 7 with cheaper materials. Rumors point to a 6.1-inch device featuring a 90Hz refresh rate, coupled with a 64-megapixel rear camera. The 7’s Tensor G2 returns for a command performance, likely bringing with it many of the software features it enabled the first time around.

Image Credits: Google

We know for sure that a Pixel Tablet is coming…at some point. Google confirmed the device’s existence at last year’s event, providing a broad 2023 release window, along with a render alongside the rest of the current Pixel lineup. Effectively, there are two points this year when Google is likely to officially announce the thing: next week or September/October. I would be shocked if the company’s long-awaited (?) reentry into the category doesn’t, at the very least, get a bit of stage time. The Android tablet has been very hit or miss as a category over the years, so presumably (and hopefully) the company has a unique spin here; I would be surprised if Google jumped back into the space without some sort of novel angle.

The leaks point to a design that would effectively turn the system into one giant Nest dock. It’s not entirely original, as Amazon tried something similar with its Fire tablets, but it would certainly buck the iPad model, which is so pervasive in the industry. Other rumors include the aforementioned Tensor G2, coupled with 8GB of RAM.

Here’s your wildcard, folks: the Pixel Fold. Google has seemingly been laying the groundwork for its own foldable for years. Here’s what I wrote a couple of weeks ago:

Some important background here. First, Google announced foldable screen support for Android back in 2018. Obviously, Samsung was both the big partner and recipient in those days, and Google wanted to make Android development as frictionless as possible for other OEMs in exploring the form factor.

The following year, Google foldable patents surfaced. Now, we’re all adults here, who implicitly understand that patents don’t mean a company is working on a product. That said, it’s another key data point in this story. In the intervening years, foldables have begun gathering steam, even outside of the Samsung orbit. I was genuinely amazed by how many different models there were populating the halls of MWC back in March.

The leaked renders point to a form factor that is more Samsung Galaxy Z Fold than Samsung Galaxy Z Flip. It also looks like it shares some common design DNA with Oppo’s recent foldable, which is frankly the right direction. Evleaks says the foldable is half an inch thick when folded and 0.2 inches unfolded, weighing in at 283 grams.

As evidenced by our trip to MWC back in February, foldables are no longer fringe devices. It’s true that they’re still cost-prohibitive for most, but we’re quickly getting to the point where nearly every Android manufacturer will have its take on the category. So why shouldn’t Google?

Other less likely hardware rumors include a Google/Nest AirTag competitor (the company announced yesterday that it’s working with Apple to create a standard for the category), new Pixel Buds and a Pixel Watch 2. I’d say all are unlikely — that last one in particular. We didn’t get much in the way of Nest products last year, and so far rumors about new home hardware are thin.

Google's Android booth at MWC 2023 in Barcelona.

Image Credits: Brian Heater

Android is always a tentpole of I/O for obvious reasons. We’ve already caught some major glimpses of the mobile operating system by way of beta releases. As Frederic noted in March, “So far, most of the features Google has talked about have also been developer-centric, with only a few user-facing features exposed so far. That also holds true for this second preview, which mostly focuses on new security and privacy features.”

The operating system, which is apparently named Upside Down Cake internally, is likely set for a summer release in late-July or August. At the top of the list of potential features are a boost to battery life (can always use one of those), additional accessibility features and privacy/security features, which include blocking users from installing ancient apps over malware concerns.

AI is going to be everywhere. Expect generative AI (Bard) in particular to make appearances in virtually every existing piece of Google consumer software, following the lead of Gmail and Docs. Search and the Chrome browser are prime targets here.

A preview of a new Wear OS seems likely. I don’t anticipate a ton of news on the AR/VR side of things, but I would also be surprised if it doesn’t at least get a nod, given what Apple reportedly has in the works for June.

The keynote kicks off at 10 AM PT on May 10. As ever, TechCrunch will be bringing you the news as it breaks.

Read more about Google I/O 2023 on TechCrunch

AMD Revenues Drop, but Company Expects Datacenter Wins Later in 2023

May 3, 2023 by Oliver Peckham

Today, AMD reported its financial results for Q1 2023. The headline: revenues ($5.4 billion) are down 9.2% year-over-year, just barely beating expectations amid turmoil in the broader economy and in AMD’s competitive landscape. However, the company is also projecting effectively flat revenue of $5.3 billion (±$300 million) for Q2 2023, which elicited disappointment from analysts. Despite the decline in revenue and the less-than-optimistic projections for the coming quarter, AMD CEO Lisa Su sought to project an air of imminent opportunity as needs for acceleration grow. AMD shares fell around five percent in after-hours trading.

AMD’s revenue declines were largely attributable to client revenue, which dropped 64% year-over-year ($2.1 billion in Q1 2022, $739 million in Q1 2023) due to plummeting PC sales. Datacenter revenue remained almost exactly flat – $1.293 billion in Q1 2022, $1.295 billion in Q1 2023 – but these earnings (along with those of the client segment) fell short of expectations.

Su characterized the Q1 earnings as “better-than-expected” given the “mixed-demand environment.” Within the datacenter segment, Su said that “high cloud sales [were] offset by lower enterprise sales.” Su explained that enterprise sales declined because “end customer demand softened due to near-term macroeconomic uncertainty.” On the other hand, Su touted wins with cloud providers and reminded the audience that a variety of server providers have begun production on systems based on 4th-gen “Genoa” CPUs, as well.

Image courtesy of AMD.

In the webcast, Su cited supercomputing wins like the Max Planck Society’s announcement (“the first supercomputer in the EU powered by fourth-gen Epyc CPUs and … MI300 accelerators”) and recent applications of the AMD-powered LUMI system. This latter point also tied in with another (predictable) focus of the earnings call: AI and large language models (LLMs). “Customer interest has increased significantly for our next-generation Instinct MI300 GPUs for both AI training and inference of large-language models,” Su said.

A render of AMD’s MI300 APU. Image courtesy of AMD.

Of course, the biggest supercomputing win for AMD that will (ostensibly) arrive this year is the exascale El Capitan supercomputer at Lawrence Livermore National Laboratory, which will leverage AMD’s hybrid CPU-GPU MI300 “APU.”

“We made excellent progress achieving key MI300 silicon and software readiness milestones in the quarter,” Su said, “and we’re on track to launch MI300 later this year to support the El Capitan exascale supercomputer at Lawrence Livermore National Laboratory and large cloud AI customers.” Later in the call, Su specified an MI300 ramp beginning in the fourth quarter with “supercomputing wins” as well as “early cloud AI wins.” She also hinted that more MI300 specifications would be arriving in the “coming quarters.”

Further: “We are on track to launch Bergamo, our first cloud-native server CPU, and Genoa-X, our fourth-gen Epyc processor with chiplets for leadership in technical computing workloads, later this quarter.”

In light of all this, Su said that while AMD expects server demand to “remain mixed” in the second quarter, they also see the company as “well-positioned” to grow its enterprise footprint in the back half of the year. Along with embedded computing, AMD sees datacenter as one of its core opportunities for growth, “led by accelerating adoption of our AI products.” Su specifically cited an expectation for double-digit datacenter growth from full-year 2022 to full-year 2023.

“We are in the very early stages of the AI computing era, and the rate of adoption and growth is faster than any other technology in recent history,” Su said. “And as the recent interest in generative AI highlights, bringing the benefits of large-language models and other AI capabilities to cloud, edge and endpoints requires significant increases in computer performance. AMD is very well-positioned to capitalize on this increased demand for compute[.]”

It’s hard to take too harsh a view of AMD’s earnings given the state of its competition: Intel announced its worst-ever quarterly loss last week, with Q1 2023 revenues declining 36% relative to Q1 2022’s. While Intel significantly beat its own guidance for that quarter, it remains to be seen whether the aggressive streamlining and pivoting occurring at the chip giant is, indeed, successfully positioning it for growth and paving a path out of its current losses.


GPT-4 cheat sheet: What is GPT-4 & what is it capable of?

Person using a chat AI on their mobile device
Image: LALAKA/Adobe Stock

GPT-4 is an artificial intelligence large language model system that can mimic human-like speech and reasoning. It does so by training on a vast library of existing human communication, from classic works of literature to large swaths of the internet.

Artificial intelligence of this type builds on that training to predict what letter, number or other character is likely to come in sequence. This cheat sheet explores GPT-4 from a high level: how to access GPT-4 for either consumer or business use, who made it and how it works.

Jump to:

  • What is GPT-4?
  • Who owns GPT-4?
  • When was GPT-4 released?
  • How can you access GPT-4?
  • How much does GPT-4 cost to use?
  • Capabilities of GPT-4
  • Limitations of GPT-4 for business
  • GPT-4 vs. GPT-3.5 or ChatGPT
  • Is upgrading to GPT-4 worth it?

What is GPT-4?

GPT-4 is a large multimodal model that can accept text and image inputs and produce human-like text. GPT-4 is able to solve written problems and generate original text. GPT-4 is the fourth generation of OpenAI’s foundation model.

Who owns GPT-4?

GPT-4 is owned by OpenAI, an independent artificial intelligence company based in San Francisco. OpenAI was founded in 2015; it started out as a nonprofit but has since shifted to a for-profit model. OpenAI has received funding from Elon Musk, Microsoft, Amazon Web Services, Infosys, and other corporate and individual backers.

OpenAI has also produced ChatGPT, a free-to-use chatbot spun out of the previous generation model, GPT-3.5, and DALL-E, an image-generating deep learning model. As the technology improves and grows in its capabilities, OpenAI reveals less and less about how its AI solutions are trained.

When was GPT-4 released?

OpenAI announced its release of GPT-4 on March 14, 2023. It was immediately available for ChatGPT Plus subscribers, while other interested users needed to join a waitlist for access.

SEE: Salesforce looped generative AI into its sales and field service products.

How can you access GPT-4?

The public version of GPT-4 is available at the ChatGPT portal site. OpenAI notes that this access may be slow, as they expect to be “severely capacity constrained.” They plan to release a new subscription level for people who use GPT-4 often and a free GPT-4 access portal with a limited number of allowable queries. No information has been released yet about when these might become available.

How much does GPT-4 cost to use?

For an individual, the ChatGPT Plus subscription costs $20 per month to use.

Enterprise customers wanting to use the GPT-4 API can join the waitlist. Access is limited; as of now, OpenAI has given only one company — the accessibility software group, Be My Eyes — partner access to its visual capabilities.

Pricing for the text-only GPT-4 API starts at $0.03 per 1k prompt tokens (one token is about four characters in English) and $0.06 per 1k completion (output) tokens, OpenAI said. (OpenAI explains more about how tokens are counted here.)

A second option with greater context length – about 50 pages of text – known as gpt-4-32k is also available. This option costs $0.06 per 1K prompt tokens and $0.12 per 1K completion tokens.
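For back-of-the-envelope budgeting, those per-token rates translate into code easily. The sketch below is a rough estimator only: the rate table mirrors the prices quoted above, and the roughly-four-characters-per-token rule of thumb is a heuristic, not the output of OpenAI’s actual tokenizer.

```python
# Rough cost estimate for a GPT-4 API call at the March 2023 list prices
# quoted above. The ~4 chars/token heuristic is an approximation.

PRICES = {  # (prompt, completion) in USD per 1,000 tokens
    "gpt-4": (0.03, 0.06),
    "gpt-4-32k": (0.06, 0.12),
}

def estimate_tokens(text: str) -> int:
    """Crude heuristic: roughly 4 characters per English token."""
    return max(1, len(text) // 4)

def estimate_cost(prompt: str, completion: str, model: str = "gpt-4") -> float:
    prompt_rate, completion_rate = PRICES[model]
    return (estimate_tokens(prompt) / 1000 * prompt_rate
            + estimate_tokens(completion) / 1000 * completion_rate)

# ~1,000 prompt tokens plus ~250 completion tokens on gpt-4
cost = estimate_cost("word " * 800, "word " * 200)
print(f"${cost:.4f}")
```

At those rates, a call with about 1,000 prompt tokens and 250 completion tokens comes to roughly four and a half cents; the same call on gpt-4-32k would cost double.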

Capabilities of GPT-4

Like its predecessor, GPT-3.5, GPT-4’s main claim to fame is its output in response to natural language questions and other prompts. OpenAI says GPT-4 can “follow complex instructions in natural language and solve difficult problems with accuracy.” Specifically, GPT-4 can solve math problems, answer questions, make inferences or tell stories. In addition, GPT-4 can summarize large chunks of content, which could be useful for either consumer reference or business use cases, such as a nurse summarizing the results of their visit to a client.

OpenAI tested GPT-4’s ability to repeat information in a coherent order using several skills assessments, including AP and Olympiad exams and the Uniform Bar Examination. It scored in the 90th percentile on the Bar Exam and the 93rd percentile on the SAT Evidence-Based Reading & Writing exam. GPT-4 earned varying scores on AP exams.

These are not true tests of knowledge; instead, running GPT-4 through standardized tests shows the model’s ability to form correct-sounding answers out of the mass of preexisting writing and art it was trained on. GPT-4 predicts which token is likely to come next in a sequence. (One token may be a section of a string of numbers, letters, spaces or other characters.)
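The next-token idea can be illustrated with a toy bigram model that counts which word follows which in a tiny corpus and then picks the most likely successor. This is only a sketch of the statistical intuition; GPT-4 itself uses a neural network over subword tokens, not raw counts.

```python
# Toy next-token prediction: count word-to-word transitions in a tiny
# corpus, then predict the most frequent successor of a given word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent token observed after `token`."""
    return successors[token].most_common(1)[0][0]

print(predict_next("the"))  # → cat ("cat" follows "the" twice, "mat" once)
```

Scaled up to billions of parameters and trained on far more text, the same predict-the-next-token objective is what lets a model produce fluent, correct-sounding answers without any notion of truth.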

Limitations of GPT-4 for business

Like other AI tools of its ilk, GPT-4 has limitations. For example, GPT-4 does not check if its statements are accurate. Its training on text and images from throughout the internet can make its responses nonsensical or inflammatory. However, OpenAI has digital controls and human trainers to try to keep the output as useful and business-appropriate as possible.

Additionally, GPT-4 tends to create ‘hallucinations,’ which is the artificial intelligence term for inaccuracies. Its words may make sense in sequence since they’re based on probabilities established by what the system was trained on, but they aren’t fact-checked or directly connected to real events. OpenAI is working on reducing the number of falsehoods the model produces.

Another major limitation is the question of whether sensitive corporate information that’s fed into GPT-4 will be used to train the model and expose that data to external parties. Microsoft, which has a resale deal with OpenAI, plans to offer private ChatGPT instances to corporations later in the second quarter of 2023, according to an April report.

Like GPT-3.5, GPT-4 does not incorporate information more recent than September 2021 in its lexicon. One of GPT-4’s competitors, Google Bard, does have up-to-the-minute information because it is trained on the contemporary internet.

GPT-4 vs. GPT-3.5 or ChatGPT

OpenAI’s second most recent model, GPT-3.5, differs from the current generation in a few ways. OpenAI has not revealed the size of the model that GPT-4 was trained on but says it is “more data and more computation” than the billions of parameters ChatGPT was trained on. GPT-4 has also shown more deftness when it comes to writing a wider variety of materials, including fiction.

GPT-4 performs higher than ChatGPT on the standardized tests mentioned above. Answers to prompts given to the chatbot may be more concise and easier to parse.

Additionally, GPT-4 is better than GPT-3.5 at making business decisions, such as scheduling or summarization. GPT-4 is “82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses,” OpenAI said.

SEE: Learn how to use ChatGPT.

Another large difference between the two models is that GPT-4 can handle images. It can serve as a visual aid, describing objects in the real world or determining the most important elements of a website and describing them.

“Over a range of domains — including documents with text and photographs, diagrams or screenshots — GPT-4 exhibits similar capabilities as it does on text-only inputs,” OpenAI wrote in its GPT-4 documentation.

Is upgrading to GPT-4 worth it?

Whether the new capabilities offered through GPT-4 are appropriate for your business is a decision that largely depends upon your use cases and whether you have found success with natural language artificial intelligence. Review the capabilities and limitations listed above, and consider where GPT-4 might be able to save time or reduce costs; conversely, consider which tasks might materially benefit from human knowledge, skill and common sense.


This new AI system can read minds accurately about half the time

Paper-craft illustration of a brain filled with multicolored geometric shapes.

The idea of someone using artificial intelligence to read your thoughts, arguably the only thing in human nature that is our own and inaccessible to anyone else, may make you shudder. Researchers at the University of Texas have published a study about a new system that can read thoughts and translate them into a continuous stream of text.

Also: Why open source is essential to allaying AI fears

The study explains how the authors trained a semantic decoder to interpret a subject’s brain activity, measured with functional magnetic resonance imaging (fMRI) while the person listened to or silently imagined stories and watched silent videos, and to produce text that directly correlates with what was heard, thought, or watched.

Also: Want a compassionate response from a doctor? Ask ChatGPT instead

The decoder is a non-invasive system that learns from the brain activity measured with an fMRI scanner while the subject listens to hours of podcasts. This teaches the system to process and correlate the input data combined with the scanned brain activity, so it can learn to decode the person's future thoughts.

The individual then either listened to a new story, imagined one, or watched four silent videos, and the decoder was able to generate text corresponding to the person's thoughts, by decoding their brain activity.

Also: Would you listen to AI-run radio? This station tested it out on listeners

AI reading minds isn’t as nefarious as it sounds. The ability to decode a person’s thoughts could help people communicate more effectively, especially those who are conscious but unable to speak, like people with physical disabilities. This study proves the viability of doing so with language brain-computer interfaces.

How accurate is AI at reading minds?

Examples of a person's thoughts (left) with the semantic decoder's interpretations (right).

This study, groundbreaking as it is, is far from a finished project. Though the semantic decoder generates a continuous stream of text, it does not provide a word-for-word transcript of a person's thoughts. The system in this study, however, is capable of decoding continuous language with complicated ideas, rather than words or simple phrases, for extended periods of time.

It's also not as effective as you may think. The study found the semantic decoder only accurately generated text that matched the meaning behind the person's thoughts about half the time.

Also: Universities that ban ChatGPT may be hurting their own admissions, according to a study

Half the time is still an exceedingly impressive result, considering it is the first successful study of a non-invasive semantic decoder that doesn't require surgical implants.

Can this AI read my mind?

Though the semantic decoder developed by the researchers at UT Austin is capable of deciphering and reconstructing a person’s thoughts to display them as text, it won’t work on just anyone.

For starters, the system requires an fMRI scanner to capture an individual’s brain activity, both during training and testing, though it could be done with other technologies in the future, like functional near-infrared spectroscopy (fNIRS).

Also: AI bots have been acing medical school exams, but should they become your doctor?

The UT researchers also value mental privacy and want this technology used only by individuals who want to use it and who can gain something from it. To diminish the potential for misuse of this technology, they showed in the study that the semantic decoder only works with willing participants who cooperate with it.

The system was unable to effectively decode thoughts from individuals it was not trained on, or who withdrew their cooperation after training.


AI can’t replace human writers

As TV writers strike, networks refuse to budge on demands not to use AI

Amanda Silberling 7 hours

In the must-watch final season of “Succession,” Kendall Roy enters a conference room with his siblings. As the scene opens, he takes a seat and declares: “Who will be the successor? Me.”

Of course, that scene didn’t appear on HBO’s hit show, but it’s a good illustration of generative AI’s level of sophistication compared to the real thing. Yet as the Writers Guild of America goes on strike in pursuit of livable working conditions and better streaming residuals, the networks won’t budge on writers’ demands to regulate the use of AI in writers’ rooms.

“Our proposal is that we not be required to adapt something that’s output by AI, and that the output of an AI not be considered writers’ work,” comedy writer Adam Conover told TechCrunch. “That doesn’t entirely exclude that technology from the production process, but it does mean that our working conditions wouldn’t be undermined by AI.”

But the Alliance of Motion Picture and Television Producers (AMPTP) refused to engage with that proposal, instead offering a yearly meeting to discuss “advances in technology.”

“When we first put [the proposal] in, we thought we were covering our bases — you know, some of our members are worried about this, the area is moving quickly, we should get ahead of it,” Conover said. “We didn’t think it’d be a contentious issue because the fact of the matter is, the current state of the text-generation technology is completely incapable of writing any work that could be used in a production.”

The text-generating algorithms behind tools like ChatGPT are not built to entertain us. Instead, they analyze patterns in massive datasets to respond to requests by determining what is most likely the desired output. So, ChatGPT knows that “Succession” is about an aging media magnate’s children fighting for control of his company, but it is unlikely to come up with any dialogue more nuanced than, “Who will be the successor? Me.”

According to Ben Zhao, a University of Chicago professor and faculty lead of art anti-mimicry tool Glaze, AI advancements can be used as an excuse for corporations to devalue human labor.

“It’s to the advantage of the studios and bigger corporations to basically over-claim ChatGPT’s abilities, so they can, in negotiations at least, undermine and minimize the role of human creatives,” Zhao told TechCrunch. “I’m not sure how many people at these larger companies actually believe what they’re saying.”

Conover emphasized that some parts of a writer’s job are less obvious than literal scriptwriting but equally difficult to replicate with AI.

“It’s going and meeting with the set decoration department that says, ‘Hey, we can’t actually build this prop that you’re envisioning, could you do this instead?’ and then you talk to them and go back and rewrite,” he said. “This is a human enterprise that involves working with other people, and that simply cannot be done by an AI.”

Comedian Yedoye Travis sees how AI could be useful in a writers’ room.

“What we do in writers’ rooms is ultimately bouncing ideas around,” he told TechCrunch. “Even if it’s not good per se, an AI can throw together a script in however many minutes, compared to a week for human writers, and then it’s easier to edit than to write.”

But even if there may be some promise for how humans can leverage this technology, he worries that studios see it merely as a way to demand more from writers over a shorter period of time.

“It says to me that they’re only concerned with things being made,” Travis said. “They’re not concerned with people being paid for things being made.”

Writers are also advocating to regulate the use of AI in entertainment because it remains a legal grey area.

“It’s not clear that the work that it outputs is copyrightable, and a movie studio is not going to spend $50 to $100 million shooting a script that they don’t know that they own the copyright to,” Conover said. “So we figured this would be an easy give for [the AMPTP], but they completely stonewalled on it.”

As the Writers Guild of America strikes for the first time since its historic 100-day action in 2007, Conover said he thinks the debate over AI technology is a “red herring.” With generative AI in such a rudimentary stage, writers are more immediately concerned with dismal streaming residuals and understaffed writing teams. Yet studios’ pushback on the union’s AI-related requests only further reinforces the core issue: The people who power Hollywood aren’t being paid their fair share.

“I’m not worried about the technology,” Conover said. “I’m worried about the companies using technology, that is not in fact very good, to undermine our working conditions.”

Four investors explain why AI ethics can’t be an afterthought

Dominic-Madori Davis 9 hours

Billions of dollars are flooding into AI. Yet, AI models are already being affected by prejudice, as evidenced by mortgage discrimination toward Black prospective homeowners.

It’s reasonable to ask what role ethics plays in the building of this technology and, perhaps more importantly, where investors fit in as they rush to fund it.

A founder recently told TechCrunch+ that it’s hard to think about ethics when innovation is so rapid: People build systems, then break them, and then edit. So some onus lies on investors to make sure these new technologies are being built by founders with ethics in mind.

To see whether that’s happening, TechCrunch+ spoke with four active investors in the space about how they think about ethics in AI and how founders can be encouraged to think more about biases and doing the right thing.

Some investors said they tackle this by doing due diligence on a founder’s ethics to help determine whether they’ll continue to make decisions the firm can support.

“Founder empathy is a huge green flag for us,” said Alexis Alston, principal at Lightship Capital. “Such people understand that while we are looking for market returns, we are also looking for our investments to not cause a negative impact on the globe.”

Other investors think that asking hard questions can help separate the wheat from the chaff. “Any technology brings with it unintended consequences, be it bias, reduced human agency, breaches of privacy or something else,” said Deep Nishar, managing director at General Catalyst. “Our investment process centers around identifying such unintended consequences, discussing them with founding teams and assessing whether safeguards are or will be in place to mitigate them.”

Government policies are also taking aim at AI: The EU has passed machine learning laws, and the U.S. has introduced plans for an AI task force to start looking at the risks associated with AI. That’s in addition to the AI Bill of Rights introduced last year. With many top VC firms injecting money into AI efforts in China, it’s important to ask how global ethics within AI can be enforced across borders as well.

Read on to find out how investors are approaching due diligence, the green flags they look for and their expectations of regulations in AI.

We spoke with:

  • Alexis Alston, principal, Lightship Capital
  • Justyn Hornor, angel investor and serial founder
  • Deep Nishar, managing director, General Catalyst
  • Henri Pierre-Jacques, co-founder and managing partner, Harlem Capital

Alexis Alston, principal, Lightship Capital

When investing in an AI company, how much due diligence do you do on how its AI model handles bias?

For us, it’s important to understand exactly what data the model takes in, where the data comes from and how they’re cleaning it. We do quite a bit of technical diligence with our AI-focused GP to make sure that our models can be trained to mitigate or eliminate bias.
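Diligence of that kind can be partly mechanical. As a minimal, hypothetical sketch (the field names and the 50% threshold here are invented for illustration, not drawn from Lightship's actual process), a representation audit over training records might look like:

```python
from collections import Counter

# Hypothetical toy training records; "group" stands in for whatever
# demographic attribute the technical diligence is auditing.
records = [
    {"group": "A", "label": 1},
    {"group": "A", "label": 0},
    {"group": "A", "label": 1},
    {"group": "B", "label": 1},
]

# Representation check: what share of the data does each group contribute?
counts = Counter(r["group"] for r in records)
total = len(records)
shares = {g: n / total for g, n in counts.items()}
print(shares)  # {'A': 0.75, 'B': 0.25}

# Flag groups falling below a chosen minimum share (threshold is arbitrary).
underrepresented = [g for g, s in shares.items() if s < 0.5]
print(underrepresented)  # ['B']
```

A real audit would run over the full dataset and cross-tabulate groups against labels, but even this level of counting surfaces the kind of skew that produced the failures Alston describes below.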

We all remember not being able to have faucets turn on automatically to wash our darker hands, and the times when Google image search “accidentally” equated Black skin with primates. I’ll do everything in my power to make sure we don’t end up with models like that in our portfolio.

How would the U.S. passing machine learning laws similar to the EU’s affect the pace of innovation the country sees in this sector?

Given the lack of technical knowledge and sophistication in our government, I have very little faith in the U.S.’ ability to pass actionable and accurate legislation around machine learning. We have such a long tail when it comes to timely legislation and for technical experts to be a part of task forces to inform our legislators.

I actually don’t see legislation making any major changes in the pace of the development of ML, given how our laws are usually structured. Similarly to the race to the bottom for legislation around designer drugs in the U.S. a decade ago, the legislation never could keep up.

KDnuggets News, May 3: Machine Learning with ChatGPT Cheat Sheet • Data Visualization Best Practices & Resources for Effective Communication

Features

  • Machine Learning with ChatGPT Cheat Sheet by KDnuggets
  • Data Visualization Best Practices & Resources for Effective Communication by Nate Rosidi
  • ChatGLM-6B: A Lightweight, Open-Source ChatGPT Alternative by Bala Priya C

From Our Partners

  • Introducing Healthcare-Specific Large Language Models from John Snow Labs by John Snow Labs

This Week's Posts

  • HuggingGPT: The Secret Weapon to Solve Complex AI Tasks by Kanwal Mehreen
  • Automate Your Codebase with Promptr and GPT by Kanwal Mehreen
  • Fine-Tuning OpenAI Language Models with Noisily Labeled Data by Chris Mauck
  • Schedule & Run ETLs with Jupysql and GitHub actions by Ido Michael
  • Working with Confidence Intervals by Benjamin O. Tayo
  • Open Assistant: Explore the Possibilities of Open and Collaborative Chatbot Development by Abid Ali Awan
  • The Rise of ChatOps/LMOps by Nisha Arya
  • Understanding Central Tendency by Benjamin O. Tayo
  • Bark: The Ultimate Audio Generation Model by Abid Ali Awan
  • Building and Training Your First Neural Network with TensorFlow and Keras by Aryan Garg
  • What is K-Means Clustering and How Does its Algorithm Work? by Clinton Oyogo

KDnuggets News

  • Top Posts April 24-30: AutoGPT: Everything You Need To Know


The 6 biggest AI features to expect from Google I/O 2023


Zoubin Ghahramani, VP of Google Research, presenting at a Google AI keynote in NYC.

Google has been a leader in developing advanced artificial intelligence and machine learning models since long before the generative AI craze began.

However, the company's generative AI efforts have paled in comparison to those of competitors such as OpenAI and Microsoft, which currently dominate the space with ChatGPT and Bing Chat. Even Google's direct answer to those services, Google Bard, has fallen short of expectations.

Also: 5 products we expect to see at Google I/O (and 3 surprise picks to keep in mind)

To accelerate its growth and hopefully bridge the innovation gap, Google will have to come up with something better than the competition, and what better time to do so than at Google I/O on May 10?

The annual developer conference brings together professionals from across the globe to learn more about the company's latest software and hardware innovations, so it's near-certain that Google will use the stage this time around to announce its latest developments in AI.

Here's a round-up of what ZDNET expects Google to announce going into event day.


Why open source is essential to allaying AI fears, according to Stability.ai founder


Twiddling the knobs at Stability.ai's website can be an addictive pastime for an hour or so. Using the DreamStudio software program made by the four-year-old British startup, one can create slick illustrations just by typing a phrase such as, "The highly diverse ZDNET authors seen through the windows of their stellar cruiser on their way to Ceti Alpha V."

Playing with the language — prompt engineering — one can add different scenarios, such as "The authors of ZDNET are an intergalactic force of half-human, half-panda warrior superheroes who wear a giant Z on the front of their costumes."

Also: The best AI art generators to try

Or, one can morph an existing photo, such as the headshot of Stability.ai's founder and CEO, Emad Mostaque, until his features turn to clay or shards of glass, a process akin to Photoshop filters on steroids.

The DreamStudio software, which burst on the scene a year ago, is one of the recent crop of "generative" artificial intelligence programs, similar to OpenAI's ChatGPT.

But Mostaque is establishing himself as the anti-OpenAI. His contention is that programs such as ChatGPT and DreamStudio are so important to the future of humanity that the world — and especially the business community — will demand to know how the programs work if we are to trust them with our sensitive data.

Also: How to use Stable Diffusion AI to create amazing images

"Open models will be essential for private data," said Mostaque during a small meeting of press and executives via Zoom last month. "You need to know everything that's inside it; these models are so powerful."

That's important, he contends, because "a lot of people are realizing that most of the valuable data in the world is private data, regulated data," said Mostaque. "There is no way that you can use black-box models for your health chatbots, or your education, or in financial services, whereas, an open model, with an open-source base, but with licensed variant data, and then the company's private data, is really important."

Also: ChatGPT's success could prompt a damaging swing to secrecy in AI

Mostaque's business plan can be summarized as, "I can be the leader of open even as everyone else does closed."

Image created in Stability.ai's DreamStudio using the prompt, "The highly diverse ZDNET authors seen through the windows of their stellar cruiser on their way to Ceti Alpha V."

By "closed," Mostaque was alluding to the decision in March by OpenAI not to disclose any technical details about its latest generative AI program, the large language model called GPT-4. Some scholars of AI have warned the move could have a chilling effect on research, and that lack of disclosure has enormous moral implications.

Stability.ai is one of a number of parties, both commercial and academic, that have responded to OpenAI's lack of disclosure by creating alternatives. Some are dedicated to openness per se. Others believe open-source software will bring greater efficiency to tame the enormous compute budget that large language models bring.

Also: How to use ChatGPT to write code

Mostaque, a former hedge fund manager, sees a great business opportunity, "a very large arbitrage opportunity," as he puts it, to "minimize the maximum regret" of businesses, in actuarial terms.

The open-source world of engineering and science, he contends, can allay businesses' fears about AI, especially the many publicized issues with ChatGPT and its ilk. That includes — but is not limited to — "hallucinations," when programs give the wrong answer; bias; unethical output; and copyright infringement.

Image created with Stability.ai's DreamStudio with the prompt, "The authors of ZDNET are an intergalactic force of half-human, half-panda warrior superheroes who wear a giant Z on the front of their costumes."

As Mostaque sees the science-business partnerships, open-source software will produce "a benchmark model for every modality, based on open data, from the commons to the commons, and then for every sector, commercially licensed variants where you know every single thing that's in there," meaning, in the program and its training data.

The term "modality" refers to the kind of media a model works with, such as text, images, or sound. Mostaque's vision is that all modalities will be enabled by open-source AI programs, not just the natural-language kind that is all the rage.

Also: This new technology could blow away GPT-4 and everything like it

Stability.ai's efforts are part of an emerging consensus that many institutions should step into the breach with code where outfits such as OpenAI go dark.

Some groups have simply built upon the earlier releases of OpenAI's GPT, such as an effort unveiled in March by AI hardware maker Cerebras Systems, which released as open source its own trained versions of the GPT programs.

But there is also a kind of collaborative ecosystem brewing.

Also: How to use Midjourney to generate any image you can imagine

Facebook owner Meta's AI group in February released the open-source LLaMA for natural language processing, which subsequently was built upon by researchers at Stanford University to create Alpaca. Then, a joint team from UC Berkeley, Carnegie-Mellon, Stanford, UC San Diego, and the Mohamed bin Zayed University of Artificial Intelligence in Abu Dhabi, built on LLaMA to create yet another program, called Vicuna.

Last week, Mostaque's company released an open-source large language model called Stable Vicuna, based on the Vicuna program. (A vicuña is a South American mammal, a nod to a long tradition of animal names in open-source programs.)

Also: Generative AI is changing your technology career path. What to know

Mostaque has been following this collaborative route for the past few years with various institutions. The technology on which DreamStudio is based, called Stable Diffusion, is a parallel to OpenAI's GPT: it allows the generation of an image from strings of words typed by the user.

Stable Diffusion was developed by Stability.ai in partnership with researchers at the Computer Vision & Learning research group at Ludwig Maximilian University of Munich, Germany, which published the original work on "latent diffusion."

The latent diffusion work, as described in last year's paper by Robin Rombach and colleagues at Ludwig Maximilian, sought to slim down the enormous compute budget of image generation, which is one of the most compute-intensive of all AI tasks.
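To make that saving concrete, here is a back-of-the-envelope sketch. The figures below are the commonly cited Stable Diffusion configuration (512x512 RGB images, 8x spatial downsampling into a 4-channel latent) and are an assumption on our part, not numbers stated by Mostaque:

```python
# Pixel space: a 512x512 RGB image.
pixel_values = 512 * 512 * 3                   # 786,432 values per image

# Latent space: the VAE downsamples 8x in each spatial dimension
# and produces 4 channels, so denoising runs on a 64x64x4 tensor.
latent_values = (512 // 8) * (512 // 8) * 4    # 16,384 values per image

# Each denoising step touches roughly this many times fewer values.
print(pixel_values / latent_values)  # 48.0
```

Running the expensive diffusion loop on the small latent tensor, rather than on raw pixels, is the essence of the compute saving the Rombach paper describes.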

Also: ChatGPT is not innovative or revolutionary, says Meta's chief AI scientist

Stability.ai has also focused on economies of scale. The Stable Diffusion software, Mostaque points out, is "a hundred thousand gigabytes of images compressed to a two-gigabyte file."

By slimming down the compute budget, the technology of large AI models can be on every smartphone, Mostaque envisions, as a personal helpmate to every individual.

"This is next-generation infrastructure," he said.

Mostaque was an invited speaker for a 90-minute talk hosted by the Collective[i] Forecast, an online, interactive discussion series that is organized by Collective[i], which bills itself as "an AI platform designed to optimize B2B sales."

Also: I used ChatGPT to write the same routine in these ten obscure programming languages

Mostaque started out his career at age 18 programming assembly language routines. "Kids today have it easy: half the code on GitHub is generated by AI," he observed.

Mostaque became inspired by artificial intelligence, he said, when his son was diagnosed with autism. "Everyone was, like, there's no cure, no information," he recalled. "We built an AI team, and we built a program to analyze all the literature [on autism], and then a pathway analysis model to evaluate potential causes, in order to identify drugs that could be repurposed for him with medical assistance.

Also: AI has caused a renaissance of tech industry R&D, says Meta's chief AI scientist

"He ended up going to a mainstream school, which I think is pretty cool," said Mostaque.

Now, Mostaque sees extending the benefits of AI to the rest of humanity with compact, efficient AI programs that can be widely distributed.

"We are in the right place, ethically," he said, "in terms of bringing this technology to everyone by focusing not on AGI [artificial general intelligence] to replace humans, but how do we augment humans with small, nimble models."


ChatGPT’s popularity with students slices Chegg’s stock nearly in half


When ChatGPT became popular, many people were concerned that ready access to AI tools could facilitate cheating among students. However, students had reliable resources for cheating long before the dawn of ChatGPT. Take Chegg, for example.

Before ChatGPT became the boogeyman, Chegg was the website that teachers warned students against and feared would enter their classrooms. In fact, many teachers banned students from using Chegg altogether. Sound familiar?

Also: Universities that ban ChatGPT may be hurting their own admissions, according to a study

With a Chegg subscription, students could access answers for their assignments. For some students, the homework help was worth the $14.95-per-month investment.

But why invest in Chegg when you can get homework help for free from ChatGPT? That's the exact question Chegg CEO Dan Rosensweig, investors, and students are asking.

On an earnings call Monday, Rosensweig mentioned that ChatGPT was having a negative impact on Chegg's growth rate, making it harder for the company to get new subscribers.

Also: How to use ChatGPT to write an essay

That statement seems to have spooked investors, with Chegg's stock plunging nearly 50% ahead of the market open Tuesday.

Other education platforms, such as Pearson and Duolingo, faced similar fates, with smaller yet significant share drops.

Chegg has an AI project of its own in the works. CheggMate will combine the power of GPT-4 with Chegg's content to create personalized learning experiences for users on the platform. Perhaps harnessing the power of ChatGPT will be the push Chegg needs to attract more subscribers to its platform.

Also: How to make ChatGPT provide sources and citations
