OpenAI makes ChatGPT ‘more direct, less verbose’

By Kyle Wiggers

ChatGPT, OpenAI’s viral AI-powered chatbot, just got a big upgrade.

OpenAI announced today that premium ChatGPT users — customers paying for ChatGPT Plus, Team or Enterprise — can now use an updated and enhanced version of GPT-4 Turbo, one of the models that powers the conversational ChatGPT experience.

This new model (“gpt-4-turbo-2024-04-09”) brings with it improvements in writing, math, logical reasoning and coding, OpenAI claims, as well as a more up-to-date knowledge base. It was trained on publicly available data up to December 2023, in contrast to the previous edition of GPT-4 Turbo available in ChatGPT, which had an April 2023 cut-off.

“When writing with ChatGPT [with the new GPT-4 Turbo], responses will be more direct, less verbose and use more conversational language,” OpenAI writes in a post on X.

Our new GPT-4 Turbo is now available to paid ChatGPT users. We’ve improved capabilities in writing, math, logical reasoning, and coding.

— OpenAI (@OpenAI) April 12, 2024

The ChatGPT update — which follows Tuesday’s general availability launch of new models in OpenAI’s API, notably GPT-4 Turbo with Vision, which adds image understanding to the otherwise text-only GPT-4 Turbo — arrives after an unflattering week for OpenAI.

Reporting from The Intercept revealed that Microsoft pitched OpenAI’s DALL-E text-to-image model as a battlefield tool for the U.S. military. And, according to a piece in The Information, OpenAI recently fired two researchers — including an ally of chief scientist Ilya Sutskever, who was among those who pushed for the ouster of CEO Sam Altman late last year — for allegedly leaking information.

How to Make a QR Code: 7 Ways to Generate QR Codes

QR codes let people access information with a smartphone: instead of typing a URL, you point your smartphone camera at a QR barcode and tap to scan. There are two types of QR codes: one links to a fixed location on the web (known as a static QR code), and the other sends customers to an updatable web location (known as a dynamic QR code).
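The "dynamic" behavior lives on the server, not in the barcode itself: a dynamic QR code encodes a stable short URL, and the destination behind that URL can be remapped at any time without reprinting the code. A minimal sketch of that idea in Python (the short-link table and domain names are hypothetical, for illustration only):

```python
# A dynamic QR code encodes a stable short URL; the operator remaps
# where that URL redirects, so the printed code never has to change.
REDIRECTS = {
    "https://qr.example.com/menu": "https://restaurant.example.com/menu-april.pdf",
}

def resolve(short_url: str) -> str:
    """Return the current destination for a scanned short URL."""
    return REDIRECTS[short_url]

def update_destination(short_url: str, new_target: str) -> None:
    """Point an already-printed QR code at new content."""
    REDIRECTS[short_url] = new_target

# The code stuck on the restaurant table stays the same;
# only the server-side mapping changes when the menu is updated.
update_destination("https://qr.example.com/menu",
                   "https://restaurant.example.com/menu-may.pdf")
```

A static QR code, by contrast, bakes the final URL directly into the barcode, so changing the destination means generating and distributing a new image.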

On an iPhone, Apple’s camera app includes QR code scan support. On an Android device, Google Assistant (with the words “scan QR code”) and the Google Camera app (with Google Lens mode), let you point your camera at a QR code, then tap to scan and search.

We’ll look at various methods that offer reliable ways to create QR codes to provide contactless access to web pages and other information.

How to create QR codes with Chrome

Google’s Chrome browser includes a free QR code generator for web pages (Figure A). The free QR generator feature is built into the Share system in Chrome on both Android and iOS and is available in every desktop version of Chrome.

Figure A: Create a QR code link to web resources in the Chrome and Chrome OS browser.

Note: Different versions of Chrome provide different QR code displays. Currently, Chrome on Android and desktop versions display the QR code with a dinosaur in the middle, while Chrome on iOS provides a standard QR code that lacks the dinosaur logo.

Android

In Chrome on Android, browse as usual to a web page, then:

  1. Tap the three-dot menu.
  2. Select Share.
  3. Select QR Code. You can tap Download to save the code to your system for later use (Figure B).
Figure B: On Android, tap the three-dot menu, Share, then QR Code to generate a code for a page.

iOS

In Chrome on iOS, browse as normal to a web page, then:

  1. Tap the Share symbol.
  2. Scroll down a bit in the displayed options and tap Create a QR Code (Figure C).
  3. Tap Share, then choose either Save Image or Save To Files to preserve the code to scan and use later.
Figure C: Tap the share glyph, then Create A QR Code within Chrome on iOS to generate a QR code.

Desktop

In desktop versions of Chrome on Windows, macOS or Chrome OS devices, right-click or tap with two fingers on a touchpad to display the Create QR Code option. The generated QR code displays in the upper-right area of the browser. You may then scan the image or select the Download button to save it (Figure D).

Figure D: Right-click or two-finger tap the touchpad, and select Create QR code for this page (left) to generate a QR code to scan or download (right).

How to create QR codes with QRbot

QRbot lets you create codes that link to a web page, as well as codes that encourage other actions, such as adding a contact, connecting to Wi-Fi, sending an email or SMS, or making a call (Figure E).

Figure E: QRbot, with web, Android and iOS apps, lets you create QR codes for a variety of links and actions.

To create a QR code with QRbot:

  1. Go to the QRbot QR Generator.
  2. Select an action.
  3. Add any necessary details.
  4. Download your custom QR code.

QRbot is available for free on the web and as Android and iOS apps. It also offers a pro option for Android and iOS ($6.99 and $14.99, respectively), which removes ads and provides access to extra features. Additionally, the upgrade on iOS lets you change the QR code design, giving users the ability to adjust colors, add a custom logo and select from more themes.

Visit QRbot

How to create QR codes with Barcode Generator

Those who use Windows may want to consider installing the free Barcode Generator app by Vevy Europe S.p.A. from the Microsoft Store. As with Chrome and QRbot, Barcode Generator provides several prebuilt action options to create QR codes for email, Twitter, Facebook, SMS, Wi-Fi, Flickr and YouTube, among others (Figure F). Select an action, enter your data, then save the generated image.

Figure F: Barcode Generator lets you select from 12 QR code link types, enter the necessary data and then save the resulting code.

Visit Barcode Generator

How to create QR codes with iQR codes

On macOS, iQR codes – QR Code Art Studio provides fast, fill-in-the-blank creation of QR codes for phone numbers, SMS, email, locations, web links, contact info, calendar events, Wi-Fi access and more (Figure G).

Figure G: iQR codes – QR Code Art Studio offers several preset options along with the ability to significantly customize the appearance of the generated QR codes.

This $14.99 app lets you make more adjustments to the display of the QR code than any of the above options. These tweaks include pixel styles, corner and resolution sliders, foreground and background colors, as well as customizations of the corner control points. The app even provides a built-in tool that assesses the readability of your QR code, along with tips to help ensure reliability when your potential customers use their QR code readers.

Visit iQR codes

More options: Create a QR code with Adobe, Canva and ChatGPT Plus

There are many other ways to create a QR code; here are three more.

Adobe

Adobe suggests that QR codes make it easy for people to access a “website, social media accounts, and other online resources.” With an Adobe account, you can use its free Adobe Express QR code generator, which offers a few style, color and file format options (Figure H).

Figure H: Create a QR code with Adobe Express for free.

Visit Adobe

Canva

The online design platform Canva provides a QR code generator in its web-based editor. With a free Canva account, you can create a QR code (Figure I) and customize the margin size, background color and foreground colors, then place the graphic as desired in your design.

Figure I: Generate and position a QR code in your design with Canva for free.

Visit Canva

ChatGPT

Even AI chatbots have entered the QR code-generation competition. Subscribers to ChatGPT Plus from OpenAI can create a QR code with a prompt (Figure J). For example, try:

Create a QR code that links to https://techrepublic.com

The system responds with a link to the QR code image you requested. Modify the prompt as desired to create a QR code that connects to whatever public link you prefer.

Figure J: ChatGPT Plus subscribers can prompt for a QR code that links to a URL (left), which the system creates and makes available as an image (right).

Visit OpenAI

How do you use QR codes?

The use of QR codes, especially for marketing and accessing online menus, proliferated during COVID-19 efforts to minimize physical contact points. Many sales systems, such as OpenTable, Shopify and Square, let business owners generate multiple QR codes for customers.

Do you use QR codes at your organization? If so, what types of information do you link to for your customers (menus, social media marketing, Wi-Fi sign-in)? What systems or apps do you use to create QR codes? Let me know how you use QR codes on X, formerly known as Twitter, at @awolber.

Top 5 AI Trends to Watch in 2024

The AI trend may seem to be following a similar trajectory of hype and adoption as previous enterprise tech trends such as cloud and machine learning, though it’s different in significant ways, including:

  • AI requires massive amounts of compute for the processes that let it digest and recreate unstructured data.
  • AI is changing how some organizations look at organizational structure and careers.
  • AI content that can be mistaken for photographs or original artwork is shaking up the artistic world, and some worry it could be used to influence elections.

Here are our predictions for five trends in AI, which often refers to generative models, to keep an eye on in 2024.

AI adoption increasingly looks like integration with existing applications

Many generative AI use cases coming to market for enterprises and businesses integrate with existing applications rather than creating completely new ones. The most high-profile example is the proliferation of copilots, meaning generative AI assistants. Microsoft has added Copilot across its 365 suite, businesses like SoftServe and many others provide copilots for industrial work and maintenance, and Google offers copilots for everything from video creation to security.


But all of these copilots are designed to sift through existing content or create content that sounds more like what a human would write for work.

SEE: Is Google Gemini or ChatGPT better for work? (TechRepublic)

Even IBM asked for a reality check about trendy tech and pointed out that tools like Google’s 2018 Smart Compose are technically “generative” but weren’t considered a change in how we work. A major difference between Smart Compose and contemporary generative AI is that some AI models today are multimodal, meaning they are able to create and interpret pictures, videos and charts.

“We’ll see a lot of innovation about that (multimodality), I would argue, in 2024,” said Arun Chandrasekaran, distinguished VP, analyst at Gartner, in a conversation with TechRepublic.

At NVIDIA GTC 2024, many startups on the show floor ran chatbots on Mistral AI’s large language models since open models can be used to create custom-trained AI with access to company data. Using proprietary training data lets the AI answer questions about specific products, industrial processes or customer services without feeding proprietary company information back into a trained model that might release that data onto the public internet. There are a lot of other open models for text and video, including Meta’s Llama 2, Stability AI’s suite of models, which include Stable LM and Stable Diffusion, and the Falcon family from Abu Dhabi’s Technology Innovation Institute.

“There’s a lot of keen interest in bringing enterprise data to LLMs as a way to ground the models and add context,” said Chandrasekaran.

Customizing open models can be done in a few ways, including prompt engineering, retrieval-augmented generation and fine-tuning.
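Of the three customization approaches just listed, two operate purely at the prompt level and can be sketched in a few lines; fine-tuning, by contrast, updates the model's weights and has no prompt-side analogue. The sketch below is illustrative only (the instructions, context format and function names are invented, not any vendor's API):

```python
def prompt_engineered(question: str) -> str:
    # Prompt engineering: steer a frozen model purely through
    # instructions and formatting placed in the prompt itself.
    return (
        "You are a support agent. Answer in two sentences or fewer.\n"
        f"Q: {question}\nA:"
    )

def rag_prompt(question: str, retrieved: list[str]) -> str:
    # Retrieval-augmented generation: prepend passages fetched from
    # enterprise data so the model is grounded in company documents.
    context = "\n".join(f"- {p}" for p in retrieved)
    return (
        f"Context:\n{context}\n\n"
        f"Using only the context above, answer: {question}"
    )
```

Either string would then be sent to the model; the difference is that the RAG prompt carries company data retrieved at query time, which is what "grounds" the model without retraining it.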

AI agents

Another way AI might integrate with existing applications more in 2024 is through AI agents, which Chandrasekaran called “a fork” in AI progress.

AI agents automate the tasks of other AI bots, meaning the user doesn’t have to prompt individual models specifically; instead, they can provide one natural language instruction to the agent, which essentially puts its team to work pulling together the different commands needed to carry out the instruction.
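The delegation pattern described above can be sketched as a coordinator that parses one instruction and dispatches sub-tasks to specialist bots. Everything here is invented for illustration (the bot names, the keyword routing, the stub outputs); real agents use an LLM, not keyword matching, to plan the sub-tasks:

```python
# Hypothetical specialist bots the agent can delegate to.
def summarize_bot(text: str) -> str:
    return f"summary({text})"

def translate_bot(text: str) -> str:
    return f"translation({text})"

# Routing table mapping instruction keywords to specialists.
ROUTES = {"summarize": summarize_bot, "translate": translate_bot}

def agent(instruction: str, payload: str) -> list[str]:
    """Split one natural-language instruction into sub-tasks and
    dispatch each to the matching specialist bot."""
    results = []
    for keyword, bot in ROUTES.items():
        if keyword in instruction.lower():
            results.append(bot(payload))
    return results

# One instruction fans out to two delegated tasks:
agent("Summarize and translate the report", "Q1 report")
```

The user issues a single instruction; the agent "puts its team to work," collecting the sub-results rather than requiring a separate prompt per model.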

Intel Senior Vice President and General Manager of Network and Edge Group Sachin Katti referred to AI agents as well, suggesting at a prebriefing ahead of the Intel Vision conference held April 9–11 that AI delegating work to each other could do the tasks of entire departments.

Retrieval-augmented generation dominates enterprise AI

Retrieval-augmented generation allows an LLM to check its answers against an external source before providing a response. For example, the AI may check its answer against a technical manual and provide the users with footnotes that have links directly to the manual. RAG is intended to increase accuracy and decrease hallucinations.

RAG gives organizations a way to improve the accuracy of AI models without causing the bill to skyrocket, and it produces more accurate results than the other common ways to add enterprise data to LLMs, prompt engineering and fine-tuning. It is a hot topic in 2024 and is likely to remain one.
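A toy end-to-end sketch of the technical-manual example above: retrieve the best-matching passage, then answer with a footnote pointing back to the source. The manual snippets, the keyword-overlap scoring and the citation format are all simplified stand-ins for a real vector search and LLM call:

```python
# A tiny "technical manual" keyed by section id.
MANUAL = {
    "reset": "Hold the power button for 10 seconds to reset the device.",
    "pairing": "Enable Bluetooth, then press the pairing button twice.",
}

def retrieve(question: str) -> tuple[str, str]:
    """Naive keyword-overlap retrieval standing in for vector search:
    pick the manual section sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(MANUAL.items(),
               key=lambda kv: len(q_words & set(kv[1].lower().split())))

def answer_with_footnote(question: str) -> str:
    section, passage = retrieve(question)
    # In a real system an LLM would generate the answer conditioned on
    # the retrieved passage; here we simply quote it with a citation,
    # mirroring the footnote-to-the-manual behavior described above.
    return f"{passage} [source: manual#{section}]"
```

Because the answer is checked against (and cited to) an external source, the user can verify it directly, which is how RAG aims to cut down on hallucinations.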

Organizations express quiet concerns about sustainability

AI is used to create climate and weather models that predict disastrous events. At the same time, generative AI is energy- and resource-heavy compared to conventional computing.

What does this mean for AI trends? Optimistically, awareness of these energy-hungry processes will encourage companies to build more efficient hardware to run them, or to right-size usage. Less optimistically, generative AI workloads may continue to draw massive amounts of electricity and water. Either way, generative AI may become part of national discussions about energy use and the resiliency of the grid. AI regulation now mostly focuses on use cases, but in the future its energy use may fall under specific regulations as well.

Tech giants address sustainability in their own ways: Google purchases solar and wind energy in certain regions, and NVIDIA touts saving energy in data centers while still running AI by using fewer server racks with more powerful GPUs.

The energy use of AI data centers and chips

The 100,000 AI servers NVIDIA is expected to ship to customers this year could consume 5.7 to 8.9 TWh of electricity a year, a fraction of the electricity used in data centers today. This is according to a paper by PhD candidate Alex de Vries published in October 2023. But if NVIDIA alone adds 1.5 million AI servers to the grid by 2027, as the paper speculates, those servers would use 85.4 to 134.0 TWh per year, a much more serious impact.
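The quoted ranges are consistent with a simple back-of-the-envelope calculation: a fleet of servers each drawing a fixed wattage around the clock. The per-server figures of 6.5 kW and 10.2 kW below are assumptions chosen to reproduce the paper's range, not numbers taken from this article:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def fleet_twh(servers: int, kw_per_server: float) -> float:
    """Annual electricity use of a server fleet, in terawatt-hours,
    assuming constant draw (1 TWh = 1e9 kWh)."""
    kwh = servers * kw_per_server * HOURS_PER_YEAR
    return kwh / 1e9

low = fleet_twh(100_000, 6.5)            # ~5.7 TWh/year
high = fleet_twh(100_000, 10.2)          # ~8.9 TWh/year
projected_high = fleet_twh(1_500_000, 10.2)  # ~134 TWh/year by 2027
```

Scaling the same per-server draw from 100,000 to 1.5 million servers is what turns a "fraction of data center electricity" into the 85.4 to 134.0 TWh figure.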

Another study found that creating 1,000 images with Stable Diffusion XL creates about as much carbon dioxide as driving 4.1 miles in an average gas-powered car.

“We find that multi-purpose, generative architectures are orders of magnitude more expensive than task-specific systems for a variety of tasks, even when controlling for the number of model parameters,” wrote the researchers, Alexandra Sasha Luccioni and Yacine Jernite of Hugging Face and Emma Strubell of Carnegie Mellon University.

In the journal Nature, Microsoft AI researcher Kate Crawford noted that training GPT-4 used about 6% of the local district’s water.

The roles of AI specialists shift

Prompt engineering was one of the hottest skill sets in tech in 2023, with people rushing to bring home six-figure salaries for instructing ChatGPT and similar products to produce useful responses. The hype has faded somewhat, and, as mentioned above, many enterprises that heavily use generative AI now customize their own models. Going forward, prompt engineering may become part of software engineers' regular tasks, not as a specialization but simply as one part of the way they perform their usual duties.

Use of AI for software engineering

“The usage of AI within the software engineering domain is one of the fastest growing use cases we see today,” said Chandrasekaran. “I believe prompt engineering will be an important skill across the organization in the sense that any person interacting with AI systems — which is going to be a lot of us in the future — have to know how to guide and steer these models. But of course people in software engineering need to really understand prompt engineering at scale and some of the advanced techniques of prompt engineering.”

Regarding how AI roles are allocated, that will depend a lot on individual organizations. Whether or not most people doing prompt engineering will have prompt engineering as their job title remains to be seen.

Executive titles related to AI

A survey of data and technology executives by MIT’s Sloan Management Review in January 2024 found organizations were sometimes cutting back on chief AI officers. There has been some “confusion about the responsibilities” of hyper-specialized leaders like AI or data officers, and 2024 is likely to normalize around “overarching tech leaders” who create value from data and report to the CEO, regardless of where that data comes from.

SEE: What a head of AI does and why organizations should have one going forward. (TechRepublic)

On the other hand, Chandrasekaran said chief data and analytics officers and chief AI officers are “not prevalent” but have increased in number. Whether or not the two will remain separate roles from CIO or CTO is difficult to predict, but it may depend on what core competencies organizations are looking for and whether CIOs find themselves balancing too many other responsibilities at the same time.

“We are definitely seeing these roles (AI officer and data and analytics officer) show up more and more in our conversations with customers,” said Chandrasekaran.

On March 28, 2024, the U.S. Office of Management and Budget released guidance for the use of AI within federal agencies, which included a mandate for all such agencies to designate a Chief AI Officer.

AI art and glazing against AI art both become more common

As art software and stock photo platforms embrace the gold rush of easy images, artists and regulators look for ways to identify AI content to avoid misinformation and theft.

AI art is becoming more common

Adobe Stock now offers tools to create AI art and marks AI art as such in its catalog of stock images. On March 18, 2024, Shutterstock and NVIDIA announced a 3D image generation tool in early access.

OpenAI recently promoted filmmakers using the photorealistic Sora AI. The demos were criticized by artist advocates, including Fairly Trained AI CEO Ed Newton-Rex, formerly of Stability AI, who called them “Artistwashing: when you solicit positive comments about your generative AI model from a handful of creators, while training on people’s work without permission/payment.”

Two possible responses to AI artwork are likely to develop further over 2024: watermarking and glazing.

Watermarking AI art

The leading standard for watermarking is from the Coalition for Content Provenance and Authenticity, which OpenAI (Figure A) and Meta have worked with to tag images generated by their AI; however, the watermarks, which appear either visually or in metadata, are easy to remove. Some say the watermarks won’t go far enough when it comes to preventing misinformation, particularly around the 2024 U.S. elections.

Figure A: Metadata on an image generated by DALL-E shows the image’s provenance.

SEE: The U.S. federal government and leading AI companies agreed to a list of voluntary commitments, including watermarking, last year. (TechRepublic)

Poisoning original art against AI

Artists looking to prevent AI models from training on original art posted online can use Glaze or Nightshade, two data poisoning tools made by the University of Chicago. Data poisoning adjusts artwork just enough to render it unreadable to an AI model. It’s likely that more tools like this will appear going forward as both AI image generation and protection for artists’ original work remain a focus in 2024.

Is AI overhyped?

AI was so popular in 2023 that it was inevitably overhyped going into 2024, but that doesn’t mean it isn’t being put to some practical use. In late 2023, Gartner declared generative AI had reached “the peak of inflated expectations,” a known pinnacle of hype before emerging technologies become practical and normalized. The peak is followed by the “trough of disillusionment” before a rise back up to the “slope of enlightenment” and, eventually, productivity. Arguably, generative AI’s place on the peak or the trough means it is overhyped. However, many other products have gone through the hype cycle before, many eventually reaching the “plateau of productivity” after the initial boom.

Amazon, eyeing up AI, adds Andrew Ng to its board — ex-MTV exec McGrath to step down

By Ingrid Lunden (@ingridlunden)

If the decisions made by corporate boards of directors can indicate where a company wants to be focusing, Amazon’s board just made an interesting move. The company announced on Thursday that Andrew Ng, known for building AI at large tech companies, is joining its board of directors. The company also said that Judy McGrath — best known for her work as a long-time TV executive, running MTV and helping Viacom become a media powerhouse — will be stepping down as a director.

Taken together, the two moves sketch out an interesting picture of the tech giant’s intentions.

After many costly years of going all out on building an entertainment empire (Amazon spent almost $19 billion on its video and music business in 2023), it’s interesting to see that McGrath, who would have been an important advocate and adviser on that strategy, is not going to stand for reelection.

However, Amazon will continue to be a huge force in streaming entertainment, be it video, music, gaming or anything else. The company is now folding in advertising across Prime Video, which is one big reason it may want to keep its audience happy and coming back.

Still, it will be interesting to see how investment plays out in that segment in 2024. The company has laid off hundreds of employees in its studio and video divisions, and it has also been winding down Prime Video in some regions, which may indicate that the business could be smaller going forward. And given the AI whiplash that every Big Tech company is currently dealing with, it feels timely that McGrath is stepping away from the board now.

To stay at the forefront of the tech industry, Amazon will be looking for better thought leadership on the next steps in its artificial intelligence strategy.

It’s worth remembering that Amazon has been a leading player in AI for a long time. Its Alexa assistant and Echo devices have done a lot to put voice recognition and connected assistants on the map; the company has been working on autonomous services in airborne and ground-level delivery, and in-store purchasing; it uses machine learning to improve how products are targeted; AWS is a big player in AI compute; and it has of late poured billions into investments in big AI startups.

Yet, for at least a year, in the wake of OpenAI’s GPT advancements, Amazon has grappled with the impression (both internally and externally) that it is “falling behind” on the technology.

Is it true? Is it just optics? Regardless, Ng’s appointment can only be helpful for advancing Amazon’s profile in the realm of AI, since the company stands to gain more thought leadership on real innovation in the space, not just making follow-on moves.

Image: Andrew Ng of Landing AI. Image Credits: TechCrunch

Ng is potentially a triple-threat board appointment: He has experience in academia, investing and hands-on building, and has usually handled all three roles simultaneously. He is currently an adjunct professor at Stanford; a general partner at the venture studio AI Fund; the head of edtech company DeepLearning.AI; the founder of computer vision startup Landing AI; and the chair of Coursera, another edtech startup he founded and used to lead.

Ng has also served as the chief scientist and VP at Chinese search giant Baidu, and he founded and led Google Brain, Google’s first big foray into building and applying AI tech across its products.

Amazon did not provide any statement from Ng in its announcement. We have reached out to him directly, and we’ll update when and if we hear back.

It may feel like a new wave of companies and thinkers are setting the pace in AI, but the Amazons of the world are certainly not standing by idly.

NVIDIA Introduces Ruler to Measure the Context Length of Models


NVIDIA researchers developed RULER, a synthetic benchmark designed to evaluate long-context large language models (LLMs) across various task categories, including retrieval, multi-hop tracing, aggregation and question answering.

The study involved benchmarking ten long-context models using RULER with context sizes ranging from 4K to 128K. The models were assessed on 13 tasks of varying complexity.

The benchmark’s code is available in a GitHub repository.

The evaluation revealed that despite achieving nearly perfect results on the needle-in-a-haystack test, all models experienced significant performance drops as the input length increased. The top-performing models, including GPT-4, Command-R, Yi-34B, and Mixtral, demonstrated satisfactory performance at 32K length, but others struggled with larger contexts.
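For readers unfamiliar with the baseline, a needle-in-a-haystack probe is simple to construct: bury one retrievable fact in a long stretch of filler text and ask the model to recall it. A stripped-down sketch of the setup (the filler sentence and the needle are invented; RULER's actual tasks are more varied and harder):

```python
import random

def build_haystack(needle: str, n_filler: int, seed: int = 0) -> str:
    """Insert one 'needle' sentence at a random position
    among repeated filler sentences."""
    random.seed(seed)
    filler = ["The grass is green and the sky is blue."] * n_filler
    pos = random.randrange(len(filler) + 1)
    filler.insert(pos, needle)
    return " ".join(filler)

needle = "The magic number is 7481."
context = build_haystack(needle, n_filler=1000)
# The probe: feed `context` to the model, ask "What is the magic
# number?", and score by exact match on "7481". Sweeping n_filler
# (and the needle's position) maps accuracy against context length.
```

Because this task only requires copying one fact back out, near-perfect scores on it say little about the harder aggregation and multi-hop tasks where RULER shows models degrading.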

The researchers also examined the impact of training context length, model size, and architecture on performance. Models trained with larger context sizes generally performed better on RULER, although performance rankings varied with longer sequences.

Larger models, such as Yi-34B-200k, outperformed smaller counterparts, demonstrating the benefits of scaling model sizes.

Non-Transformer architectures like RWKV-v5 and Mamba-2.8B-slimpj faced significant degradation when extending context size to 8K and underperformed compared to the Transformer baseline Llama2-7B.

The main results showed that while all models claimed context sizes of 32K tokens or more, none of them maintained performance above the Llama2-7B baseline at their claimed length, except for Mixtral, which performed moderately well at double its claimed context size of 32K.

The models experienced large degradation in performance when tested using RULER as sequence length increased, despite achieving nearly perfect results in the needle-in-a-haystack task. GPT-4 was the best-performing model, exhibiting the highest performance at 4K length and the least degradation when extending the context to 128K.

Additionally, the study found that the top three open-source models—Command-R, Yi-34B, and Mixtral—used a large base frequency in RoPE and had larger parameter sizes. Although the LWM, trained with a context size of 1M, performed worse than Llama2-7B at 4K, it showed smaller degradation with increasing context size, leading to a higher rank than Mistral-7B in weighted evaluations.
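The "base frequency in RoPE" mentioned above controls how slowly rotary positional embeddings cycle: each embedding pair i rotates at angle θ_i = base^(−2i/d), so a larger base stretches the longest wavelength and keeps distant positions distinguishable. A quick illustration (the head dimension and base values are representative choices, not the exact configurations of the models named in the study):

```python
import math

def max_wavelength(base: float, dim: int) -> float:
    """Longest positional wavelength (in tokens) for rotary
    embeddings: the slowest pair rotates at
    theta = base ** (-(dim - 2) / dim), and one full cycle
    spans 2*pi / theta positions."""
    slowest_theta = base ** (-(dim - 2) / dim)
    return 2 * math.pi / slowest_theta

# A common default base vs. a long-context-style large base,
# with a representative head dimension of 128:
default = max_wavelength(10_000.0, 128)   # tens of thousands of tokens
large = max_wavelength(1_000_000.0, 128)  # orders of magnitude longer
```

Raising the base is a cheap lever for long-context behavior, which is consistent with the study's observation that the top open-source models all used a large RoPE base frequency.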

RULER’s open-source availability aims to encourage comprehensive evaluation and further research on long-context modeling, highlighting significant room for improvement in this area.

The post NVIDIA Introduces Ruler to Measure the Context Length of Models appeared first on Analytics India Magazine.

The Secret Superstar of LLMs

Cohere is on a roll! Its latest model, Command R+, has beaten GPT-4 on the Arena leaderboard and is now available on HuggingChat.

Unlike OpenAI, Cohere focuses on enterprises rather than catering to consumers with conversational chatbots. “We do not have and never will have a cash-burning consumer chatbot,” said Martin Kon, the COO of Cohere.

Cohere offers several models in three categories: Embed, Command and Rerank. Each category serves specific use cases and can be tailored to suit particular needs.

Cohere’s latest model, Command R+, will soon be available on Oracle Cloud Infrastructure (OCI) and Microsoft Azure. Notably, this will be the first time a Cohere model is accessible on any cloud platform besides OCI.

Oracle’s Baby

Cohere’s relationship with Oracle is different from OpenAI’s with Microsoft. Cohere co-founder and CEO Aidan Gomez said that Cohere is independent of any cloud service provider, allowing it to deploy its models on any cloud, unlike OpenAI, which is limited to Microsoft Azure.

“We think independence is extremely important, so we’re available on every single cloud platform you know—Azure, GCP, OCI, AWS—in addition to on-prem. You’re not getting locked into one stack, one cloud,” said Gomez on the sidelines of the World Economic Forum in Davos in 2024.

“We’re not taking massive behemoth cheques from a single cloud provider that might lock us into one ecosystem or environment. We’re really trying to be independent and build something new for the world,” he added, indirectly taking a dig at OpenAI.

Kon echoed similar sentiments, saying that models need to be cloud-agnostic so that you can deploy them wherever you feel comfortable with your data and not be tied to a specific cloud or even on-premises.

Though Gomez said that Cohere is independent, it does share a cosy relationship with Oracle. Cohere trains and builds its generative AI models on Oracle Cloud Infrastructure (OCI), which offers high-performance and low-cost GPU cluster technology. This enables Cohere to accelerate LLM training while reducing costs.

“The relationship with Oracle has been hugely impactful, both on the front of compute and providing the best supercomputers on the face of the planet, but also in terms of going to market together, creating new products together, transforming existing products, and bringing this technology to enterprise,” said Gomez.

“OCI generative AI service really lives up to our mission of building large language models for enterprise in a way that is hyper-protective of their data, completely secure,” added Gomez.

Cohere’s generative AI models are integrated into Oracle’s business applications, including Oracle Fusion Cloud, Oracle NetSuite, and Oracle industry-specific applications.

Oracle recently added generative AI capabilities within the Oracle Fusion Cloud Applications Suite, which consists of applications designed to manage various aspects of a company, including finance, human resources, supply chain, sales, marketing, and customer service.

Will Oracle Buy Cohere?

Recently, the OpenAI competitor has been in talks to raise $500 million at a valuation of about $5 billion. Last June, Cohere achieved a valuation of $2.2 billion following a $270 million funding round that involved investors such as Inovia Capital, NVIDIA, and Oracle.

Cohere has been struggling to generate significant revenue. The company generated about $13 million in annualised revenue at the end of last year, equivalent to just over $1 million per month. Meanwhile, OpenAI hit the $2 billion revenue mark in December 2023.

Cohere’s revenue is considerably lower than that of its competitors. However, the startup has informed investors that its sales pipeline, comprising potential contracts expected to close by the end of 2024, is valued at over $300 million. It’s not clear how much Oracle’s share in that is.

On the other hand, Oracle recorded revenue of $13.3 billion in Q3 2024, up 7%. It also signed a big Generation 2 cloud infrastructure contract with NVIDIA.

“Oracle’s Gen2 AI infrastructure business is booming. That’s become pretty clear to everybody,” said Oracle’s chief technology officer, Larry Ellison, during the earnings call.

“In addition to selling infrastructure for training AI large language models, Oracle is also completely reengineering its industry-specific applications to take full advantage of generative artificial intelligence,” he added.

Oracle has developed a Clinical Digital Assistant for doctors, which it will deliver in Q4. It automatically generates doctors’ notes and updates Electronic Health Records.

Ellison also said that Oracle is building the largest data centers in the world. “We’re building an AI data centre in the United States where you could park eight Boeing 747s nose-to-tail in that one data centre. We are building large numbers of data centres,” he said.

“We’re building 20 data centres for Microsoft and Azure. They just ordered three more data centres this quarter… And there are other multi-cloud agreements that are being signed,” he added.

It wouldn’t be surprising if Oracle considered acquiring Cohere in the future if the AI startup struggles to generate significant revenue. However, Cohere’s decision to deploy its model on Microsoft Azure could potentially open up new revenue streams.

The post The Secret Superstar of LLMs appeared first on Analytics India Magazine.

Simbian brings AI to existing security tools

Kyle Wiggers / 10 hours

Ambuj Kumar is nothing if not ambitious.

An electrical engineer by training, Kumar led hardware design for eight years at Nvidia, helping to develop tech including a widely used high-speed memory controller for GPUs. After leaving Nvidia in 2010, Kumar pivoted to cybersecurity, eventually co-founding Fortanix, a cloud data security platform.

It was while heading up Fortanix that the idea for Kumar’s next venture came to him: an AI-powered tool to automate a company’s cybersecurity workflows, inspired by challenges he observed in the cybersecurity industry.

“Security leaders are stressed,” Kumar told TechCrunch. “CISOs don’t last more than a couple of years on average, and security analysts have some of the highest churn. And things are getting worse.”

Kumar’s solution, which he co-founded with former Twitter software engineer Alankrit Chona, is Simbian, a cybersecurity platform that effectively controls other cybersecurity platforms as well as security apps and tooling. Leveraging AI, Simbian can automatically orchestrate and operate existing security tools, finding the right configurations for each product by taking into account a company’s priorities and thresholds for security, informed by their business requirements.

With Simbian’s chatbot-like interface, users can type a cybersecurity goal in natural language; Simbian then provides personalized recommendations and generates what Kumar describes as “automated actions” to carry them out (as best it can).

“Security companies have focused on making their own products better, which leads to a very fragmented industry,” Kumar said. “This results in a higher operational burden for organizations.”

To Kumar’s point, polls show that cybersecurity budgets are often wasted on an overabundance of tools. Over half of businesses feel that they’ve misspent around 50% of their budgets and still can’t remediate threats, according to one survey cited by Forbes. A separate study found that organizations now juggle on average 76 different security tools, leading IT teams and leaders to feel overwhelmed.

“Security has been a cat-and-mouse game between attackers and defenders for a long time; the attack surface keeps growing due to IT growth,” Kumar said, adding that there’s “not enough talent to go around.” (One recent survey from Cybersecurity Ventures, a security-focused VC firm, estimates that the shortfall of cyber experts will reach 3.5 million people by 2025.)

In addition to automatically configuring a company’s security tools, the Simbian platform attempts to respond to “security events” by letting customers steer security while taking care of lower-level details. This, Kumar says, can significantly cut down on the number of alerts security analysts must respond to.

But that assumes Simbian’s AI doesn’t make mistakes, a tall order, given that it’s well established that AI is error-prone.

To minimize the potential for off-the-rails behavior, Simbian’s AI was trained using a crowdsourcing approach — a game on its website called “Are you smarter than an LLM?” — that tasked volunteers with trying to “trick” the AI into doing the wrong thing. Kumar explained that Simbian used this learning, along with in-house researchers, to “ensure the AI does the right thing in its use cases.”

This means that Simbian effectively outsourced part of its AI training to unpaid gamers. But, to be fair, it’s unclear how many people actually played the company’s game; Kumar wouldn’t say.

There are privacy implications of a system that controls other systems, especially concerning those that are security-related. Would companies — and vendors, for that matter — be comfortable with sensitive data funneling through a single, AI-controlled centralized portal?

Kumar claims that every attempt has been made to protect against data compromise. Simbian uses encryption — customers control the encryption keys — and customers can delete their data at any time.

“As a customer, you have full control,” he said.

While Simbian isn’t the only platform to attempt to apply a layer of AI over existing security tools — Nexusflow offers a product in a similar vein — it appears to have won investors over. The company recently raised $10 million from investors including Coinbase board member Gokul Rajaram, Cota Capital partner Aditya Singh, Icon Ventures, Firebolt and Rain Capital.

“Cybersecurity is one of the most important problems of our time, and has famously fragmented ecosystem with thousands of vendors,” Rajaram told TechCrunch via email. “Companies have tried to build expertise around specific products and problems. I applaud Simbian’s method of building an integrated platform that would understand and operate all of security. While this is extremely challenging approach from technology perspective, I’ll put my money — and I did put my money — on Simbian. It’s the team with unique experience all the way from hardware to cloud.”

Mountain View-based Simbian, which has 15 employees, plans to put the bulk of the capital it’s raised toward product development. Kumar’s aiming to double the size of the startup’s workforce by the end of the year.

AI-Music Platform Race Accelerates with Udio


It’s only been a few days since we spoke highly of the music generation platform Suno AI, which can create a song in any genre or theme of your choice with simple text prompts. Now, here comes Udio!

Introducing Udio, an app for music creation and sharing that allows you to generate amazing music in your favorite styles with intuitive and powerful text-prompting.
1/11 pic.twitter.com/al5uYAsU5k

— udio (@udiomusic) April 10, 2024

With text prompts for genres and styles of one’s desire, Udio can create any track with ease, whether it be EDM, piano jazz, or extreme metal. Similar to Suno AI, the application provides a user-friendly interface for creating and storing songs. Udio can even generate vocals in multiple styles, including Bollywood-themed songs.

The company is backed by some of the top investors, including Andreessen Horowitz, Mike Krieger, Oriol Vinyals and music artist Will.I.Am, among others. It has raised $10 million in seed funding.

After a closed beta, Udio made its public debut yesterday, giving anyone the option to try the new platform.

Google DeepMind Expertise

The new platform is not only backed by some of the greatest tech investors but also founded by a former big tech researcher. David Ding, the founder and CEO of Udio Music, was a senior researcher with Google DeepMind for over five years. “There is nothing available that comes close to the ease of use, voice quality, and musicality of what we’ve achieved with Udio — it’s a real testament to the folks we have involved,” he said.

Once open to all, the application garnered a surge of interest that ultimately led to a waitlist. Given the current intense interest, it remains to be seen whether this enthusiasm will persist or diminish once the next app arrives.

Source: X

The post AI-Music Platform Race Accelerates with Udio appeared first on Analytics India Magazine.

7 Things Students Are Missing in a Data Science Resume


As I reflect on my days as a student, I now realize that there were a few crucial elements that were missing from my data science resume. These shortcomings probably resulted in my being rejected for various job positions. Not only was I unable to present myself as a valuable asset to potential teams, but I also struggled to showcase my ability to solve data science problems. However, with time, I got better and collaborated with multiple teams to figure out what I was missing and how I could do better if I had to start over.

In this blog, I will share the 7 things that students often overlook in their data science resumes, which can prevent hiring managers from calling them for interviews.

1. Simple and Readable Resume

Complicating your resume with technical terms, too much information, or unconventional formats can lead to it being rejected right away. Your resume should be easy to read and understand, even by someone not deeply versed in data science. Use a clean, professional layout with clear headings, bullet points, and a standard font. Avoid dense blocks of text. Remember, the goal is to communicate your skills and experiences as quickly and effectively as possible to the hiring manager.

2. Quantifiable Achievements

When you are listing your previous work experiences or projects in the experience section, it is recommended to focus on quantifiable achievements rather than simply listing your responsibilities.

For example, instead of stating "Developed machine learning models", you could write "Developed a machine learning model that increased sales by 15%." This will demonstrate the tangible impact of your work and showcase your ability to drive results.

3. Data Science Specific Skills

When creating a list of your technical skills, it's crucial to highlight the ones that are directly relevant to data science. Avoid including skills that are unrelated, such as graphic design or video editing. Keep your list of skills concise, and note the number of years of experience you have with each.

Make sure to mention programming languages like Python or R, data visualization tools like Tableau or Power BI, and data analysis tools like SQL or pandas. Additionally, it's worth mentioning your experience with popular machine learning libraries such as PyTorch or scikit-learn.

4. Soft Skills and Teamwork

Data science is not solely dependent on technical abilities. Collaboration and communication skills are crucial. Including experiences where you worked as part of a team, especially in multidisciplinary settings or instances where you communicated complex data insights to non-technical stakeholders, can demonstrate your soft skills.

5. Real World Experience

Employers value practical, hands-on experience in the field of data science. If you have completed internships, projects, or research in data science, be sure to highlight these experiences in your resume. Include details about the projects you worked on, the tools and technologies you used, and the results you achieved.

Students often underestimate the power of showcasing relevant projects. Whether it’s a class assignment, a capstone project, or something you built for fun, include projects that demonstrate your skills in data analysis, programming, machine learning, and problem-solving. Be sure to describe the project goal, your role, the tools and techniques used, and the outcome. Links to GitHub repositories or project websites can also add credibility.

6. Adaptability and Problem Solving Skills

The field of data science is continually evolving, and employers are seeking candidates who can adapt to new challenges and technologies.

As a data scientist, you may find yourself jumping from being a data analyst to a machine learning engineer in just a few months. Your company may even ask you to deploy machine learning models in production and learn how to manage them.

The role of a data scientist is fluid, and you have to be mentally prepared for the role changes. You can demonstrate your adaptability and problem-solving skills by highlighting any experiences in which you had to learn a new tool or technique quickly, or where you successfully tackled a complex problem.

7. Links to a Professional Portfolio

Creating an online portfolio and sharing it on your resume is extremely important. It enables hiring managers to quickly look at your previous projects and the tools you used to solve certain data problems. You can check out the top platforms for creating a data science portfolio for free: 7 Free Platforms for Building a Strong Data Science Portfolio

Failing to include a link to your GitHub repository or a personal website where you showcase your projects is a missed opportunity.

Final Thoughts

One important thing to keep in mind when submitting your resume for job applications is to tailor it to the job requirements. Look for the skills required for the job and try to include them in your resume to increase your chances of getting an interview call. Beyond your resume, networking and LinkedIn can be very helpful in finding jobs and freelance projects. Consistently maintaining your LinkedIn profile and posting regularly can go a long way in establishing your professional presence.

Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master's degree in technology management and a bachelor's degree in telecommunication engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.


Humane’s Ai Pin considers life beyond the smartphone


Co-founders Bethany Bongiorno and Imran Chaudhri discuss the history and future of the $699 generative AI wearable

Brian Heater @bheater / 8 hours

Nothing lasts forever. Nowhere is the truism more apt than in consumer tech. This is a land inhabited by the eternally restless — always on the make for the next big thing. The smartphone has, by all accounts, had a good run. Seventeen years after the iPhone made its public debut, the devices continue to reign. Over the last several years, however, the cracks have begun to show.

The market plateaued, as sales slowed and ultimately contracted. Last year was punctuated by stories citing the worst demand in a decade, leaving an entire industry asking the same simple question: what’s next? If there was an easy answer, a lot more people would currently be a whole lot richer.

Smartwatches have had a moment, though these devices are largely regarded as accessories, augmenting the smartphone experience. As for AR/VR, the best you can really currently say is that — after a glacial start — the jury is still very much out on products like the Meta Quest and Apple Vision Pro.

When it began to tease its existence through short, mysterious videos in the summer of 2022, Humane promised a glimpse of the future. The company promised an approach every bit as human-centered as its name implied. It was, at the very least, well-funded, to the tune of $100 million+ (now $230 million), and featured an AI element.

The company’s first product, the Humane Ai Pin, arrives this week. It suggests a world where being plugged in doesn’t require having one’s eyes glued to a screen in every waking moment. It’s largely — but not wholly — hands-free. A tap to the front touch panel wakes up the system. Then it listens — and learns.

Beyond the smartphone

Image Credits: Darrell Etherington/TechCrunch

Humane couldn’t ask for better timing. While the startup has been operating largely in stealth for the past seven years, its market debut comes as the trough of smartphone excitement intersects with the crest of generative AI hype. The company’s bona fides contributed greatly to pre-launch excitement. Founders Bethany Bongiorno and Imran Chaudhri were previously well-placed at Apple. OpenAI’s Sam Altman, meanwhile, was an early and enthusiastic backer.

Excitement around smart assistants like Siri, Alexa and Google Home began to ebb in the last few years, but generative AI platforms like OpenAI’s ChatGPT and Google’s Gemini have flooded that vacuum. The world is enraptured with plugging a few prompts into a text field and watching as the black box spits out a shiny new image, song or video. It’s novel enough to feel like magic, and consumers are eager to see what role it will play in our daily lives.

That’s the Ai Pin’s promise. It’s a portal to ChatGPT and its ilk from the comfort of our lapels, and it does this with a meticulous attention to hardware design befitting its founders’ origins.

Press coverage around the startup has centered on the story of two Apple executives having grown weary of the company’s direction — or lack thereof. Sure, post-Steve Jobs Apple has had successes in the form of the Apple Watch and AirPods, but while Tim Cook is well equipped to create wealth, he’s never been painted as a generational creative genius like his predecessor.

If the world needs the next smartphone, perhaps it also needs the next Apple to deliver it. It’s a concept Humane’s founders are happy to play into. The story of the company’s founding, after all, originates inside the $2.6 trillion behemoth.

Start spreading the news

Image Credits: Alexander Spatari / Getty Images

In late March, TechCrunch paid a visit to Humane’s New York office. The feeling was tangibly different than our trip to the company’s San Francisco headquarters in the waning months of 2023. The earlier event buzzed with the manic energy of an Apple Store. It was controlled and curated, beginning with a small presentation from Bongiorno and Chaudhri, and culminating in various stations staffed by Humane employees designed to give a crash course on the product’s feature set and origins.

Things in Manhattan were markedly subdued by comparison. The celebratory buzz that accompanies product launches has dissipated into something more formal, with employees focused on dotting I’s and crossing T’s in the final push before product launch. The intervening months provided plenty of confirmation that the Ai Pin wasn’t the only game in town.

January saw the Rabbit R1’s CES launch. The startup opted for a handheld take on generative AI devices. The following month, Samsung welcomed customers to “the era of Mobile AI.” The “era of generative AI” would have been more appropriate, as the hardware giant leveraged a Google Gemini partnership aimed at relegating its bygone smart assistant Bixby to a distant memory. Intel similarly laid claim to the “AI PC,” while in March Apple confidently labeled the MacBook Air the “world’s best consumer laptop for AI.”

At the same time, Humane stumbled through reports of a small layoff round and a slight delay in preorder fulfillment. Both can be written off as products of the immense difficulty of launching a first-generation hardware product, especially under the kind of intense scrutiny few startups see.

For the second meeting with Bongiorno and Chaudhri, we gathered around a conference table. The first goal was an orientation with the device, ahead of review. I’ve increasingly turned down these sorts of meeting requests post-pandemic, but the Ai Pin represents a novel enough paradigm to justify a sit-down orientation with the device. Humane also sent me home with a 30-minute intro video designed to familiarize users — not the sort of thing most folks require when, say, upgrading a phone.

More interesting to me, however, was the prospect of sitting down with the founders for the sort of wide-ranging interview we weren’t able to do during last year’s San Francisco event. Now that most of the mystery is gone, Chaudhri and Bongiorno were more open about discussing the product — and company — in-depth.

Origin story

Humane co-founders Bethany Bongiorno and Imran Chaudhri.


One Infinite Loop is the only place one can reasonably open the Humane origin story. The startup’s founders met on Bongiorno’s first day at Apple in 2008, not long after the launch of the iPhone App Store. Chaudhri had been at the company for 13 years at that point, having joined at the depths of the company’s mid-90s struggles. Jobs would return to the company two years later, following its acquisition of NeXT.

Chaudhri’s 22 years with the company saw him working as director of Design on both the hardware and software sides of projects like the Mac and iPhone. Bongiorno worked as project manager for iOS, macOS and what would eventually become iPadOS. The pair married in 2016 and left Apple the same year.

“We began our new life,” says Bongiorno, “which involves thinking a lot about where the industry was going and what we were passionate about.” The pair started consulting work. However, Bongiorno describes a seemingly mundane encounter that would change their trajectory soon after.

Image Credits: Humane

“We had gone to this dinner, and there was a family sitting next to us,” she says. “There were three kids and a mom and dad, and they were on their phones the entire time. It really started a conversation about the incredible tool we built, but also some of the side effects.”

Bongiorno adds that she arrived home one day in 2017 to see Chaudhri pulling apart electronics. He had also typed out a one-page descriptive vision for the company that would formally be founded as Humane later the same year.

According to Bongiorno, Humane’s first hardware device never strayed too far from Chaudhri’s early mockups. “The vision is the same as what we were pitching in the early days,” she says. That’s down to Ai Pin’s most head-turning feature, a built-in projector that allows one to use the surface of their hand as a kind of makeshift display. It’s a tacit acknowledgement that, for all of the talk about the future of computing, screens are still the best method for accomplishing certain tasks.

Much of the next two years were spent exploring potential technologies and building early prototypes. In 2018, the company began discussing the concept with advisors and friends, before beginning work in earnest the following year.

Staring at the sun

It’s time for change, not more of the same. pic.twitter.com/I6K5FVYzx2

— Humane (@Humane) July 19, 2022

In July 2022, Humane tweeted, “It’s time for change, not more of the same.” The message, which reads as much like a tagline as a mission statement, was accompanied by a minute-long video. It opens in dramatic fashion on a rendering of an eclipse. A choir sings in a bombastic — almost operatic — fashion, as the camera pans down to a crowd. As the moon obscures the sunlight, their faces are illuminated by their phone screens. The message is not subtle.

The crowd opens to reveal a young woman in a tank top. Her head lifts up. She is now staring directly into the eclipse (not advised). There are lyrics now, “If I had everything, I could change anything,” as she pushes forward to the source of the light. She holds her hand to the sky. A green light illuminates her palm in the shape of the eclipse. This last bit is, we’ll soon discover, a reference to the Ai Pin’s projector. The marketing team behind the video is keenly aware that, while it’s something of a secondary feature, it’s the most likely to grab public attention.

As a symbol, the eclipse has become deeply ingrained in the company’s identity. The green eclipse on the woman’s hand is also Humane’s logo. It’s built into the Ai Pin’s design language, as well. A metal version serves as the connection point between the pin and its battery packs.

Image Credits: Brian Heater

The company is so invested in the motif that it held an event on October 14, 2023, to coincide with a solar eclipse. The device comes in three colors: Eclipse, Equinox and Lunar, and it’s almost certainly no coincidence that this current big news push is happening a mere days after another North American solar eclipse.

However, it was on the runway of a Paris fashion show in September that the Ai Pin truly broke cover. The world got its first good look at the product as it was magnetically secured to the lapels of models’ suit jackets. It was a statement, to be sure. Though its founders had left Apple a half-dozen years prior, they were still very much invested in industrial design, creating a product designed to be a fashion accessory (your mileage will vary).

The design had evolved somewhat since conception. For one thing, the top of the device, which houses the sensors and projector, is now angled downward, so the Pin’s vantage point is roughly the same as its wearer’s. An earlier version with a flatter surface would unintentionally angle the pin upward when worn on certain chest types. Nailing down a more universal design required a lot of trial and error with a lot of people of different shapes and sizes.

“There’s an aspect of this particular hardware design that has to be compassionate to who’s using it,” says Chaudhri. “It’s very different when you have a handheld aspect. It feels more like an instrument or a tool […] But when you start to have a more embodied experience, the design of the device has to be really understanding of who’s wearing it. That’s where the compassion comes from.”

Year of the Rabbit?

Image Credits: rabbit

Then came competition. When it was unveiled at CES on January 9, the Rabbit R1 stole the show.

“The phone is an entertainment device, but if you’re trying to get something done it’s not the highest efficiency machine,” CEO and founder Jesse Lyu noted at the time. “To arrange dinner with a colleague we needed four-five different apps to work together. Large language models are a universal solution for natural language, we want a universal solution for these services — they should just be able to understand you.”

While the R1’s product design is novel in its own right, it’s arguably a more traditional piece of consumer electronics than the Ai Pin. It’s handheld and has buttons and a screen. At its heart, however, the functionality is similar. Both are designed to supplement smartphone usage and are built around a core of LLM-trained AI.

The device’s price point also contributed to its initial buzz. At $200, it’s a fraction of the Ai Pin’s $699 starting price. The more familiar form factor also likely comes with a smaller learning curve than Humane’s product.

Asked about the device, Bongiorno makes the case that another competitor only validates the space. “I think it’s exciting that we kind of sparked this new interest in hardware,” she says. “I think it’s awesome. Fellow builders. More of that, please.”

She adds, however, that the excitement wasn’t necessarily there at Humane from the outset. “We talked about it internally at the company. Of course people were nervous. They were like, ‘What does this mean?’ Imran and I got in front of the company and said, ‘Guys, if there weren’t people who followed us, that means we’re not doing the right thing. Then something’s wrong.’”

Bongiorno further suggests that Rabbit is focused on a different use case, as its product requires focus similar to that of a smartphone — though both Bongiorno and Chaudhri have yet to use the R1.

A day after Rabbit unveiled the product, Humane confirmed that it had laid off 10 employees, amounting to 4% of its workforce. It’s a small fraction of a company with a small headcount, but the timing wasn’t great, a few months ahead of the product’s official launch. The news also saw its long-time CTO, Patrick Gates, exiting the C-suite for an advisory role.

“The honest truth is we’re a company that is constantly going through evolution,” Bongiorno says of the layoffs. “If you think about where we were five years ago, we were in R&D. Now we are a company that’s about to ship to customers, that’s about to have to operate in a different way. Like every growing and evolving company, changes are going to happen. It’s actually really healthy and important to go through that process.”

The following month, the company announced that pins would now be shipping in mid-April. It was a slight delay from the original March ship date, though Chaudhri offers something of a Bill Clinton-style “it depends on what your definition of ‘is’ is” answer. The company, he suggests, defines “shipping” as leaving the factory, rather than the more industry-standard definition of shipping to customers.

“We said we were shipping in March and we are shipping in March,” he says. “The devices leave the factory. The rest is on the U.S. government and how long they take when they hold things in place: tariffs and regulations and other stuff.”

Money moves

Image Credits: Brian Heater

No one invests $230 million in a startup out of the goodness of their heart. Sooner or later, backers will be looking for a return. Integral to Humane’s path to positive cashflow is a subscription service that’s required to use the thing. The $699 price tag comes with 90 days free, then after that, you’re on the hook for $24 a month.

That fee brings talk, text and data from T-Mobile, cloud storage and — most critically — access to the Ai Bus, which is foundational to the device’s operation. Humane describes it thusly, “An entirely new AI software framework, the Ai Bus, brings Ai Pin to life and removes the need to download, manage, or launch apps. Instead, it quickly understands what you need, connecting you to the right AI experience or service instantly.”

Investors, of course, love to hear about subscriptions. Hell, even Apple relies on service revenue for growth as hardware sales have slowed.

Bongiorno alludes to internal projections for revenue, but won’t go into specifics for the timeline. She adds that the company has also discussed an eventual path to IPO even at this early stage in the process.

“If we weren’t, that would not be responsible for any company,” she says. “These are things that we care deeply about. Our vision for Humane from the beginning was that we wanted to build a company where we could build a lot of things. This is our first product, and we have a large roadmap that Imran is really passionate about of where we want to go.”

Chaudhri adds that the company “graduated beyond sketches” for those early products. “We’ve got some early photos of things that we’re thinking about, some concept pieces and some stuff that’s a lot more refined than those sketches when it was a one-man team. We are pretty passionate about the AI space and what it actually means to productize AI.”