TikTok creators will need to disclose AI-generated content, or else

TikTok logo on a phone

If you use social media, you have likely been duped by AI, whether it was the viral Pope Francis photo or a realistic-sounding AI-generated music collaboration.

AI-generated content is so realistic that in many cases it is virtually impossible to tell what is fake from what is real. This can have significant negative consequences, such as the spread of misinformation. For that reason, social media platforms have been working on measures to address the problem.

Also: ChatGPT's Custom Instructions feature is now available for everyone

A post on X (formerly Twitter) shared by social media consultant Matt Navarra reveals the option for an AI-generated content label to help users differentiate between actual and AI content.

With the new feature, users can toggle the AI-generated label option when uploading a video that includes AI-generated content. Failure to do so would violate community guidelines and could lead to the content's removal from the platform.

Also: 5 emerging use cases of generative AI in commerce, according to Mastercard

TikTok has yet to announce the feature publicly, so details on rollout are not available. However, I could not access it on the app, leading me to believe that it is being rolled out to select users for testing.

Recently, another X post revealed that Instagram is also working on an AI label of its own. Similar to TikTok's, Instagram's label would appear on a post that was AI-generated. Details on who would be responsible for activating it, and on when it might roll out, are also yet to be disclosed.

Artificial Intelligence

IBM Plans to Make Llama 2 Available within watsonx.ai Platform

A sign with the Watson and IBM logos.
Image: MichaelVi/Adobe Stock

The race for territory in the generative AI-as-a-service world continues as IBM brings Meta's open-source large language model, Llama 2, to its platform. Watsonx is a generative AI foundation model platform, while watsonx.ai is the studio for building and fine-tuning foundation models, including generative AI and machine learning applications.

“IBM believes a platform is only as valuable as the ecosystem it enables,” said Tarun Chopra, vice president of product management for IBM Data and AI, in an email to TechRepublic. “All of the world’s enterprises and their customers — not just a select few — should benefit from foundation models and generative AI.”

Jump to:

  • What does Llama 2 add to watsonx.ai?
  • What is open source when it comes to generative AI?
  • How business leaders should decide on a generative AI platform
  • What are Llama 2 and watsonx.ai’s competitors?

What does Llama 2 add to watsonx.ai?

Users of IBM’s Watsonx data enterprise platform now have access to Meta’s Llama 2, IBM announced on July 18. Depending on the version, Llama 2 can be up to a 70-billion parameter model. It was trained on 2 trillion tokens of data from publicly available sources.

Including Llama 2 in Watsonx is part of a business strategy of portraying and providing “open innovation that’s guarded by embedded governance and trustworthy principles,” Chopra said.

Llama 2 is now available within the watsonx.ai studio in early access for select clients and partners. IBM has not yet revealed a date for a full release.

The watsonx.ai prompt lab has a guardrail function users can turn on or off to remove potentially harmful content from both the input and output text.

SEE: Hiring Kit: Prompt Engineer (TechRepublic Premium)

What is open source when it comes to generative AI?

There has been some opposition in the developer community over the use of the term open source in the context of Meta and Llama 2. The Open Source Initiative, one of the standards bodies behind open-source software, is still in the process of defining what open source means when referring to generative AI.

“The value in an AI tool isn’t simply in the model or algorithm,” said Peter Zaitsev, founder of open-source database software company Percona, in an email to TechRepublic. “An equally important element is the data or AI weights (a.k.a. neural net weights) used to train the model for a specific use or application. This aspect is inherently tricky for applying open source principles.”

Until the OSI develops a standard, companies like Meta “are misusing the ‘open’ title for their own benefit,” Zaitsev said.

How business leaders should decide on a generative AI platform

“Business leaders should seek platforms and models that can safely tap their organizations’ unique data sets and proprietary domain knowledge,” Chopra said.

He also offered the following questions to ask when choosing which generative AI platform to use:

  • Do you have direct access to the tools, technology, infrastructure and expertise that will empower you to either build your own models or adapt third-party models and deploy them at scale?
  • Will you be able to leverage and retain your own data?
  • Do you feel confident you can adequately manage and govern your models throughout their life cycles?

SEE: Generative AI may change the way security professionals see the cloud. (TechRepublic)

What are Llama 2 and watsonx.ai’s competitors?

Llama 2 competes primarily with GPT-4 and Anthropic’s Claude 2. Other players in the AI platform space include Amazon SageMaker Studio, Google Vertex AI and Microsoft Azure AI.


How to use AI to create a logo for free


An AI-generated logo for one of the most ridiculous business ventures ever conceived.

Creating a logo with artificial intelligence isn't hard; the hardest part is choosing the right tools. If you want to use generative AI tools to make a logo, you can go with a specialized tool, which typically entails paying for it, or wing it with one of the free tools available — though this will require some work on your end.

Also: We're not ready for the impact of generative AI on elections

Depending on your design needs and skills, there are ways to use generative AI to create a logo without paying a fee. This is even easier if you're familiar with Photoshop or other photo editing software to make necessary tweaks.

How to use AI to create a logo for free

What you'll need: Access to one of the free AI image generators available and access to image editing software.

I'll be using the Bing Image Creator.

Here are the results from Bing Image Creator. You can see the text doesn't make sense, so we'll tweak the image we choose.

Editing the logo in Photoshop, I only added the 'Wrenshakes' name and banner.

FAQs

Is there AI that can create a logo?

AI logo generators exist, though there are drawbacks. Some require payment, and other free ones create very simplistic logotypes. The benefit is that you don't have to work on them after they're created, as the tools add your company's name and slogan to the logotype.

What are some good AI logo generators to use?

  • Namecheap: This free AI logo generator lets you edit and download your logos without payment just by signing up for an account.
  • Hatchful: This free AI logo maker from Shopify — my favorite tool — generates more cohesive logos that look less generic than the alternatives. It also asks users to select their brand styles, from creative and bold to classic and reliable, as part of the creation process.
  • Brandmark: This is a paid logo generator tool with a one-time cost that ranges from $25 to $175. Its logos are simple yet attractive, but it offers only limited graphic options.
  • Looka: Lets you customize the logo and gives you many layout options, but it requires payment and the graphic options are limited. Pricing is a one-time cost of $20 or $65 per logo package, or brand kits with multiple files and formats as a subscription for $96 or $192 billed annually.
  • Logo AI: The logos created with Logo AI seem a little generic overall, but there were some interesting options. It offers three paid packages ranging from $29 to $99.
  • Designs.AI: This site walks you through its questionnaire to narrow your logo preferences. It's also fairly limited in graphics, and plans cost between $29 and $69 monthly.

These AI logo generator tools are easy to use and have users fill out a questionnaire with the company's name, slogan, category and keywords, color palette, and preferred fonts.

Also: The best AI art generators: DALL-E 2 and fun alternatives to try

Why can't some AI art generators add text accurately to logos and images?

The free generative AI tools aren't the best at creating logotypes because they can't generate the letters accurately. Large language models (LLMs) are trained on data from the internet and other digitized sources. They generate new content based on their knowledge, but the smaller details don't always translate accurately.

This is why AI-generated faces look creepy, hands in AI-generated images have more or fewer fingers than the norm, and why words you want to be included don't look right.

Also: Generative AI: Just don't call it an 'artist' say scholars in Science magazine

Is there a free AI tool to generate images?

AI image generators are widely available, with many reliable options for free. If you want to use an AI image generator for free, check out the Bing Image Creator, DALL-E 2, Midjourney, or Craiyon.

Disclaimer: Using AI-generated images could lead to copyright violations, so people should be cautious if they're using the images for commercial purposes.


Newegg adds ChatGPT-powered feature to save you from review analysis paralysis

Newegg AI reviews

If you're the type of person who pores endlessly over online reviews when making a purchase, a new feature on Newegg could help make that process a little easier.

Newegg announced this week that it is now using ChatGPT to distill customer reviews into brief snippets or phrases called "Review Bytes," plus a longer AI-generated paragraph summary. Instead of relying solely on the composite review score, customers can now see the key points other reviewers mention most often.

Also: Is Temu legit? What to know before you place an order

Newegg says that shopping for a graphics card, for example, might bring up quick "Review Bytes" that mention "no coil whine," "decent temps," and a "zero RPM fan mode." The AI paragraph summary below would go into a little more detail, even pointing out some problem areas that other customers have had.
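Under the hood, a feature like this typically just packs the reviews into a single prompt and asks the model for key phrases plus a paragraph. Here is a minimal sketch in Python; the helper name, prompt wording, and model choice are my assumptions for illustration, not Newegg's actual pipeline.

```python
def build_review_prompt(reviews):
    """Pack customer reviews into a single summarization request."""
    joined = "\n".join(f"- {r}" for r in reviews)
    return (
        "Summarize these product reviews. First list three to five short "
        "key phrases, then write a one-paragraph summary that notes any "
        "recurring complaints.\n\nReviews:\n" + joined
    )

reviews = [
    "Fast performance, boots in seconds.",
    "No bloatware out of the box.",
    "Screen had a dead pixel on arrival.",
]
prompt = build_review_prompt(reviews)

# Sending the prompt would require an OpenAI API key, e.g.:
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(resp.choices[0].message.content)
```

The interesting product decision is all in the prompt: asking for short phrases first is what yields snippet-style "bytes" rather than one undifferentiated summary.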

When I tested the feature on a popular gaming laptop, "Review Bytes" mentioned "fast performance" and "no bloatware" but pointed out a "screen issue" some customers encountered.

At present, the feature is only available on the desktop version of Newegg's site for products that meet a threshold of minimum reviews.

Andrew Choi, Newegg's director of brand and website experience, admits that "the process of analyzing relevant information to make a purchase decision can be arduous." Viewing real-life customer feedback more efficiently should remove some of the trouble, he said.

Also: Every product we're expecting at Apple's September event

To be transparent about the source of the summary, Newegg clearly labels it "SummaryAI." The original reviews remain just below, in their usual spot.

Newegg's AI review summary feature is just the latest in a trend of shopping websites utilizing artificial intelligence. Earlier this year, Mercari rolled out a virtual AI shopping assistant, Wayfair created a tool that uses AI to let customers redecorate their living room, and Bing introduced AI price match monitors.


Generative AI Takes Center Stage at the 2023 Ai4 Conference

Over the years, the landscape of artificial intelligence (AI) has undergone dramatic evolution. This continuous progression was showcased at the recent Ai4 Conference, held from August 7th to 9th, 2023, in Las Vegas, NV, where Generative AI took center stage.

A Glimpse Back at Ai4 2022

Last year's Ai4 conference was a celebration of the legacy techniques that laid the foundation for current AI innovations. Sessions and discussions were heavily dominated by established methods like deep learning, which has been at the forefront of image and speech recognition advancements. Reinforcement learning was another key player, with its ability to train agents to make a series of decisions by rewarding them for correct actions. Additionally, niche applications such as federated learning made notable appearances, emphasizing the importance of privacy and decentralized machine learning.

The Emergence of Generative AI in 2023

Fast-forward a year, and the scene had dramatically changed. No matter which session you attended at this year's event, Generative AI was the undisputed star of the show, spanning the generation of images, text, music, video, and more. With its potential to create content at a scale previously deemed impossible, its prominence at the 2023 Ai4 Conference was hardly surprising.

What caught many by surprise was not just the presence of Generative AI, but the extent to which it overshadowed legacy methods. This isn’t to suggest that foundational techniques have become obsolete; they remain integral to many AI applications. However, the shift in focus underscores the industry's tendency to be forward-looking, always seeking the next breakthrough.

While the positive applications of Generative AI were highlighted, concerns about the potential negative implications of this technology were also present.

Concerns with Generative AI

Intellectual Property

One panel focused exclusively on intellectual property concerns. The potential of these models to reproduce, rephrase, or even "create" new content raises significant questions about plagiarism and intellectual property rights. How do we as a society attribute originality in a world where machines can generate content on demand?

The two questions that were raised are:

  1. If art is generated by AI, can it be copyrighted, and more importantly, should it be?
  2. Are AIs that vacuum up the world's information and art infringing on the copyrighted material of artists?

Neither question has a solid answer yet; as a society, we are still waiting for clarification.

Security Threats

Some of the sessions discussed potential misuses of AI, such as adversarial attacks or generating text and voices with the intention of deceiving the general public.

For example, Generative AI's capability to produce human-like text and imitate personal styles can be misused for phishing attacks. A sophisticated AI could craft emails or messages tailored to individuals, making scams more convincing than ever.

While deepfakes commonly refer to video or audio, Generative AI can also craft fabricated textual content. Unscrupulous individuals could forge statements, interviews, or writings, wrongly attributing them to real individuals and potentially causing reputational harm. For example, how do banks verify a user's voice over the phone if a voice can be cloned and words can be autogenerated on the fly?

A Standout Startup

While most of the presentations were by legacy companies, there was one standout startup that should be on everyone's radar.

Rain Neuromorphics is working on building artificial brains by training End-to-End Analog Neural Networks with Equilibrium Propagation.

Rain has an ambitious roadmap that will ultimately enable 100 billion-parameter models in a chip the size of a thumbnail.

Rain is a Y Combinator alumnus that counts Sam Altman as an investor. In February 2022, it announced a successful $25 million Series A raise.

Future of AI

Notably absent was much discussion of Artificial General Intelligence (AGI) or other ways to take AI to the next level. It will be interesting to see whether this becomes more of a topic at future Ai4 events once the intense obsession with Generative AI subsides.

Anthropic releases a faster, smarter, cheaper AI model

Anthropic on a smartphone

Since OpenAI released ChatGPT, many companies have attempted to create their own AI models, but only some have been able to stand out. Anthropic is one of them.

The AI startup released its own AI model, Claude, in March. It has proven to be a worthy rival to OpenAI's GPT-3.5 and GPT-4. With that initial launch, Anthropic also released Claude Instant, a lighter, less expensive, and faster version of Claude, according to Anthropic. Now, it's getting an upgrade.

Also: How AI helped get my music on all the major streaming services

On Wednesday, Anthropic released Claude Instant 1.2, an improved version of the model that leverages Claude 2.0, the latest version of Claude that was released in July.

Because it uses Claude 2.0's advanced abilities, Claude Instant 1.2 has significantly improved in math, coding, reasoning, and safety and generates longer, more structured responses, according to the release.

To put the model to the test, Anthropic compared Claude Instant 1.1 and 1.2 on standard benchmark evaluations, including the Codex evaluation and the grade-school math benchmark GSM8K, which are good measures of coding and math abilities.

Also: TikTok creators will need to disclose AI-generated content, or else

In both instances, 1.2 outperformed 1.1 with a score of 58.7% versus the original's 52.8% in the Codex evaluation and 86.7% versus the original's 80.9% in GSM8k.

For the rest of the benchmark exams, the newer model performed either slightly below or above the older model, with minimal differences.

The quality of the output also improved, with fewer hallucinations and greater resistance to jailbreaking attempts; Anthropic's red-teaming evaluation found Claude Instant 1.2 to be safer than its predecessor.

Businesses can get access to the new model by filling out an interest form, and developers can use it via the API; Claude Instant is much less expensive than Claude 2.


How AI helped get my music on all the major streaming services


My music is now on Spotify, Apple Music, Amazon Music, and all the rest.

Some time ago, during a particularly dark time in my life, I found some solace in composing music that reflected my various moods. After that, the tracks sat, all 12 of them, waiting for — I don't know — something.

Many musicians have similar experiences. They create music, write songs, and compose tunes, but never really get them "out there."

Back in the day, the only ways you could share your music were through live performances, a self-published CD shared with all ten of your friends, or getting signed by a record label (a very high barrier to entry, rife with difficulty and risk).

Also: How I used ChatGPT and AI art tools to launch my Etsy business fast

But all that has changed. You may not realize it, but Americans still buy CDs and vinyl (about half say they "like to purchase physical copies of their music"). That said, music streaming services have become even more mainstream than physical album sales.

Launching a new music album or EP no longer requires catching the eye of an A&R (artists and repertoire) executive at a record label. You can use a digital music distributor to push your music out onto streaming services without having to be considered marketable enough by some record executive, which really helps if your music doesn't fit neatly into a popular genre.

This really is a golden age for indie music, because music releases are no longer solely dependent on the approval of mainstream sound gatekeepers. Anyone can launch an album and get it onto Spotify, Apple Music, Amazon Music, iTunes, Deezer, Tidal, and the rest.

So I did. And I used AI to help me make it happen.

TL;DR

AI was not used for:

  • Composing, mixing, or mastering the actual music
  • The website setup and design
  • Doing all the various deals to establish distribution

AI was used for:

  • Album cover art
  • Music genre identification
  • Music promotional prose (descriptions of the music)
  • Artist music bio
  • Artist profile image for Spotify, Apple Music, etc., as well as the hero image for the website
  • Extended banner on the website
  • Artist image using a computer keyboard
  • Artist image in front of my muscle car

No AI was used in the creation or mixing of this music

Let me be clear. While I did use my computer to mix licensed sound samples and compose my own work, I did not use an AI to create or mix my music. My music is all me. That's important because I want my listeners (dare I say, fans) to know that what's on those tracks represents my skills, feelings, heart, and soul.

Also: How does ChatGPT actually work?

I did dabble with, and then discard, the idea of using an AI to help master the music. Mastering is the process where an audio engineer takes your mix and zhuzhes it up, making everything match, sound right, get the best balance, and sometimes even pull more from the music than the original composer was able to mix in originally.

A good sound engineer is worth their weight in gold. So, of course, some companies have tried to create AIs to reduce the process to a few clicks.

To help me choose which mastering service to try, I watched these folks compare a number of automated mastering tools. They determined that LANDR was the best of the AI tools, so that's the one I tried.

Also: How to improve the quality of Spotify streaming audio

I'm certainly not a sound engineer, and while anyone's music can probably benefit from the wisdom and skills of a deeply experienced human sound engineer, I write about AI. So I plunked down $40 for LANDR's premium mastering plan and gave it a shot.

While the UI for LANDR was fine, I found the results to be maddening. Did it sound better? Did it sound worse? Did I have a good enough ear to tell the difference?

LANDR lets you choose some options for how it will process your master, so you're essentially guiding the AI through something you don't otherwise know all that well.

Then, if you're not sure you like the sound, you can use some refinement controls to further post-process the sound.

If this is AI, it's definitely a lazy one. I want my sound engineer (or even my AI sound engineer) to make the decisions. If I have to make all the sound decisions, then what's its real value?

I do believe a few of the tracks were improved…slightly. But my main, favorite tracks were clearly made muddier. I agonized over this for almost two weeks until, after sleeping on it, I finally came to the answer that was right for me.

I wanted the music to be my music. Especially in this time of generative AI, and especially since I work a lot with AI, I didn't want any implication that some machine wrote my music. So, if I wanted to be able to say, "No AI was used in the creation or mixing of this music," then AI mastering was not right for me.

Also: How to write better AI prompts

That said, while no AI was used in the creation or mixing of this music, I used the heck out of AI for the creation of album covers, website graphics, and related promotional text.

Album covers

The first step for which I turned to AI was in creating the album covers. In fact, it was the AI's work product that helped me refine the release strategy for my music.

I have 12 tracks completed, which is enough for a full album of music. Back in the days before social media, you'd want to release an entire album, especially if you were an indie musician.

But today, it's better to release music in what's called a waterfall release strategy. The idea is you do successive releases of singles, in order to get more play on social media and on digital platforms.

Also: How to use Midjourney to generate amazing images and art

I tried getting Midjourney to create covers for all 12 tracks, but I wasn't happy with some of them. On the other hand, I was ecstatic about four of them, as well as about the image that will eventually become the cover of the full album itself.

So I decided to release four EPs, three months apart, each with three songs. Each EP (extended play) will use one of the covers generated by Midjourney. Then after all four EPs and all 12 songs are released separately, I'll combine them into one LP album using another image generated by Midjourney.

The first of these EPs is "House of the Head." My prompt for Midjourney was simple. I just typed in:

/imagine house of the head

Also: The best AI art generators: DALL-E 2 and fun alternatives to try

I got back a variety of results:

Eventually, I settled on this cover (I added the text in Photoshop), which was compelling and reflected a lot of what I wanted to showcase not only in the song, but in the three tracks of this first EP.

There are four additional images I'll be using over the next year, but you'll have to wait and see those when they are released with the music.

Identifying musical genres

When you submit music to a distributor, streaming service, and even for promotion, you're supposed to know what genre the tunes fall into. I had no idea. I tend to like the music I like, but I've never really paid attention to the music genre. The last music class I took was music appreciation, back in junior high.

Also: ChatGPT's Custom Instructions feature is now available for everyone

But I did know the instruments used to create the various sounds I incorporated into my compositions. I fed a list of their names to ChatGPT Plus with the WebPilot add-on and asked this:

You are a music A&R executive. You like an album that uses rain sticks, flutes, cymbals, trumpets, saxophones, trombones, bass, electric, and acoustic guitar, congas, Tibetan bowls and chimes, rawhide shakers, an autoharp, a darbuka, a panpipe, and the one string Afro-Brazilian bow called the berimbau.

You'll also hear a wide variety of synthesizer effects, a Steinway Grand Piano, and even the sounds of rain and thunder.

List the two main genres and a sub-genre for each main genre music with these influences will be cataloged under. Use WebPilot if needed.

From that, I got back the following recommendation:

Is it perfect? Probably not. If I really were an A&R executive, I'd be able to fit the tunes into their proper categories. But since I'm no expert, the AI did give me a leg up. The songs were accepted, and so far, the playlist curators who've looked at the songs didn't feel they were in the wrong genres, so I'll call this another win for the AI.

Creating a music description

My next challenge was describing the music itself. Again, I turned to ChatGPT. This time, I fed it the instrument list above and then asked it this:

Imagine you're writing a feature article for a music magazine about the album 'House of the Head'. Describe the unique sound and musical experience the album offers, considering the wide array of instruments and the blend of electronica, worldbeat, and orchestral music.

And here's what it told me:

Creating a musician bio

If you plan to promote your music, you're going to need a bio. Now, as it turns out, I have a bunch of bios. I have my very pompous primary professional bio, and I've also had to create bios focused on certain areas of my work for various clients and speaking gigs over the years. But, without a doubt, none of them was suitable as a musician bio.

Also: 7 advanced ChatGPT prompt-writing tips you need to know

I wasn't entirely sure how to write a musician bio, but I knew that many artists on Spotify had them listed there. So I found three from artists I like listening to and captured their bios into a text file. Then I created this first of two queries:

I am presenting you with three example artist bios. Each individual bio is enclosed in >>> <<<. Analyze all three bios and discuss the characteristics common across all of them that make them particularly suited to describe musicians and artists.

I then embedded the three bios in >>><<< blocks and fed it all to ChatGPT. This is what I got back:

That's incredibly useful. Next, I fed it both the music description from above and my long, pompous professional bio. Then I asked it this:

Now, take the professional bio and music description for David Gewirtz, and using the criteria and style identified in the musician bio format, write an artist's bio for David Gewirtz.
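This two-step pattern (give the model delimited examples to analyze, then ask it to write in that style) is easy to reproduce programmatically. The sketch below is illustrative only: the helper names are mine, and the prompt text paraphrases the queries quoted above.

```python
def wrap_examples(bios):
    """Enclose each example bio in >>> <<< so the model can tell them apart."""
    return "\n\n".join(f">>>{bio}<<<" for bio in bios)

def analysis_prompt(bios):
    # Step 1: ask the model what the example bios have in common.
    return (
        "I am presenting you with example artist bios, each enclosed in "
        "delimiters. Analyze them and list the characteristics that make "
        "them well suited to describing musicians.\n\n" + wrap_examples(bios)
    )

def generation_prompt(professional_bio, music_description):
    # Step 2: sent in the same chat session, so the model can reuse
    # the criteria it identified in step 1.
    return (
        "Now, using the criteria and style you identified, write an "
        "artist bio based on the following.\n\n"
        f"Professional bio: {professional_bio}\n\n"
        f"Music description: {music_description}"
    )

examples = ["First example bio...", "Second example bio...", "Third example bio..."]
step1 = analysis_prompt(examples)
step2 = generation_prompt("Long professional bio...", "Electronica/worldbeat album description...")
# Each prompt would be sent as a user message, step1 first, in one conversation.
```

The delimiters matter: without them, the model can blur the boundary between one example and the next, or between the examples and your instructions.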

You can see the result on the House of the Head website, on the bio page. Since I used AI for nearly all the graphics on my music website, it's probably fair to point you there now.

We'll be spending some more time there next.

The three types of AI website support

Most musicians need a website for promotion. It's a place where all the information needed by journalists, fans, and playlist curators can go to find out about your music.

At this point, there are three main ways that AI can help with website creation. The first is creating the actual site's structure and look. The second is helping with the prose. The third is helping with images.

AI-based site creation

In preparation for this project, I looked at more than 10 vendors that offer AI-based website creation and I was very unimpressed. Most asked a few questions and then created a site based on the answers to the questions.

Also: ChatGPT Plus can mine your corporate data for powerful insights. Here's how

But the sites seemed very canned. I wouldn't be surprised if vendors created 50 or so pre-built templates for the most common site categories and then just generated the site from the pre-built template, calling it "AI".

That's why I'm not naming names. I expect that this area will see improvement eventually, but unless you want something very basic and fairly generic, AI website generation is not ready for prime time — yet.

For now, I decided it would be easier simply to build my own site using WordPress and a theme (I used Divi from Elegant Themes) and host the site on one of the servers I'm already paying for. For the record, I chose Divi because I've previously used it on another site, it's pretty good, and I have an already-paid license for it. There are certainly other excellent WordPress themes and even some that are musician focused.

AI-based marketing content creation

Here is where I relied very heavily on AI tools. You saw how I used ChatGPT to create a musician's bio. That's one page of the site, but it's an important page.

AI-based image creation

I used AI to create all of the site's original images. This includes the home page hero image:

Both the hero image and the album image were created in Midjourney. Then the final image was composited in Photoshop and the text added in Photoshop as well.

I also used AI to create the site's wide banner, as well as two more spotlight images on the bio page. There's a lot to unpack here, so I'm going to break things out into their own sections.

The AI-ification of the Musician Dave image

I have been notoriously camera-shy. This is odd for a guy who appears in a couple of hundred YouTube videos and spent a good part of the 2010s splattered all over network TV doing guest commentary, but it's true. In my younger days, if there was a camera at an event, I went the other way. As such, there are very few pictures of me as a younger man.

Also: 7 advanced ChatGPT prompt-writing tips you need to know

Back in the day, I was required to take a few publicity stills for work, and I did sit in front of a photographer on a few occasions. It was a difficult experience for everyone involved, and the pictures were mostly mediocre. I did get one good shot of me looking over sunglasses with a flag behind me, but that was taken more than a decade ago, back when I was doing a lot more work as a political pundit.

Recently, my way of generating images of myself has been to grab stills from my YouTube videos. This works well for tech-related images, like my Facebook profile picture. But musicians are supposed to have a lot more style.

Techie David could get away with being charmingly geeky. But musician David had to be cool.

Actually, this is worth a minute of serious discussion. Music imagery is both an art and a science. If you look at the publicity photos of most musicians, they sure don't reflect how the artists look before coffee on a Saturday morning. Instead, they're often stage or studio shots that are carefully crafted to project an image and a specific feeling.

Also: How (and why) to subscribe to ChatGPT Plus

They're meant to convey an impression of the artist that's not exactly tied to their day-to-day real life. For my album and future releases, I needed images that weren't reflective of the everyday me. I needed stylized images that fit the musician vibe.

To pull it off, I spent a lot of time in Midjourney, with the help of Insight FaceSwap and Adobe Generative Fill.

Midjourney for the main profile image

Midjourney allows you to upload an image, which the tool will then incorporate into its AI generation of new images. I started with this basic image of me talking into a mic, which I've been using as my social media profile image.

You upload an image into Midjourney by clicking the plus button. Once the image is uploaded, you right-click on it to get the URL, and then you paste that URL after the /imagine command in Discord (Midjourney lives inside Discord, as do a few other AI tools), followed by whatever prompt you want.
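The workflow above boils down to a single Discord command. The URL placeholder below stands in for whatever link Discord gives your uploaded image:

```text
/imagine <pasted-image-url> <your text prompt>
```

Midjourney treats the leading URL as an image prompt and blends it with whatever text follows.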

It took a lot of tries. Here are a bunch I ruled out.

You need a pretty thick skin and a sense of humor to do this with Midjourney. Many other results were even weirder.

But then I just appended cyberpunk after the URL and I got this.

It was perfect. It somehow (I'm sure it was random chance) picked a leather jacket that looks almost exactly like the one I've been wearing for a decade now. This image became the main image on my music site, and my profile avatar for the various streaming services that require you to specify an artist image.

Photoshop (beta) Generative Fill for the banner

At the top of each page is a wide banner. The original Midjourney image just isn't that wide.

But I loaded the image into Photoshop, added more canvas space on either side of the original images, and Photoshop Generative Fill gave me a much wider image.
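The manual half of that step, before the AI paints anything, is just centering the original on a wider canvas. Here's a stdlib-only Python sketch of the idea, modeling an image as rows of RGB tuples; the generative fill itself, which invents content for the blank margins, still needs Photoshop:

```python
def widen_canvas(rows, target_width, fill=(0, 0, 0)):
    """Pad each pixel row equally on both sides so the original image
    sits centered on a wider canvas; the padded margins are what a
    generative-fill tool then paints in."""
    width = len(rows[0])
    pad = target_width - width
    left = pad // 2
    right = pad - left
    return [[fill] * left + list(row) + [fill] * right for row in rows]

# Stand-in 2x4 "image": one red row, one green row.
img = [[(255, 0, 0)] * 4, [(0, 255, 0)] * 4]
wide = widen_canvas(img, 10)
print(len(wide[0]))  # 10 pixels wide, original centered at columns 3-6
```

The centering arithmetic is the same whether the canvas is 10 pixels or 3,000; Photoshop just does it with real pixels, then fills the margins.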

Also: How to use Photoshop's Generative Fill AI tool to easily transform your boring photos

Sweet.

FaceSwap for the hacker image

Since my music bio talked about my techie roots, I decided that it needed a picture of me with a keyboard. Rather than starting with the previous radio show image I used to generate my hero image, I used the Midjourney URL of the actual hero image itself. I then fed Midjourney a ton of prompts until I arrived at:

/imagine typing on qwerty computer keyboard, cyberpunk, lightning, data center --ar 9:16

The --ar parameter sets the aspect ratio, which got me the tall image.

Like me, but not me.
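The --ar flag fixes only the shape of the frame, not its exact size. As a rough sketch of the arithmetic, here's how a W:H ratio maps to pixel dimensions at a fixed pixel budget; the one-megapixel budget below is my assumption for illustration, not Midjourney's actual resolution rule:

```python
from math import sqrt

def dims_for_aspect(ar_w: int, ar_h: int, budget: int = 1024 * 1024) -> tuple[int, int]:
    """Approximate output dimensions for a W:H aspect ratio,
    keeping the total pixel count near a fixed budget."""
    unit = sqrt(budget / (ar_w * ar_h))
    return round(ar_w * unit), round(ar_h * unit)

print(dims_for_aspect(9, 16))   # tall, portrait-style frame
print(dims_for_aspect(16, 9))   # the same budget turned sideways
```

Flipping the ratio swaps the dimensions while keeping roughly the same pixel count, which is why a 9:16 render feels like the same image rotated into portrait.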

The image above was fine, except it wasn't my face. To fix that, I used Insight FaceSwap. I uploaded another image of me that showed my face pretty well:

The real me. After coffee.

Then, using the /swapid command, I uploaded the hacker image from above and let FaceSwap do its magic. It returned this. It's subtle, but this is much more me than the previous image.

The original is on the left. The face match version is on the right. Because the one on the left is also based on my face, they're similar. But the one on the right is a bit more me.
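For reference, the Insight FaceSwap bot's two-step flow, as best I can reconstruct it (the command names here are from memory, so check the bot's own help output):

```text
/saveid <name>   ← attach a clear photo of your face and register it under a name
/swapid <name>   ← attach the target image; the bot returns it with the saved face swapped in
```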

Midjourney, FaceSwap, and Generative Fill for the car image

I wanted one more image on the bio page — a picture of me with my car. I drive a red Dodge Challenger, which I earned the right to drive by virtue of successfully passing into midlife while managing an ongoing stream of crisis experiences.

Using the same seed image technique, I told Midjourney this:

/imagine standing in front of red dodge challenger, cyberpunk

And I got back this:

It's close, but needs work.

As you can see, it's not really my face. And that hair. That was most definitely not my hair.

To fix it, I first used FaceSwap to put my face on my body. Then I saved the image onto my desktop, opened it in the Photoshop beta, and selected the area with the hair. Clicking the Generative Fill button, I told Photoshop to give me curly hair. I also had Photoshop Generative Fill clean up the street a bit and widen the image to fit the web page I wanted to use it on.

Also: These 3 AI tools made my two-minute how-to video way more fun and engaging

It took about 12 attempts, but then I got this image, which wasn't bad. All I did to tweak it was to add a bit of gray, and it now much more closely represents what I look like:

Before on the left, after on the right. The guy on the right is pretty much me. The guy on the left, not so much.

And here's how the final image came out:

Notice that there's more street on both the left and right sides of the image. That's Photoshop Generative Expand in action.

I know I just gave you a really fast description of how to do a face match, which is a problem many Midjourney users are trying to solve. This article is too long for an extra in-depth how-to, but I plan to produce a guide on getting a perfect face match using Midjourney AI and FaceSwap. Stay tuned.

Distribution, promotion, and all the rest

The interesting secret about today's music business, compared to the 1990s and earlier, is that distribution is now disintermediated on a mammoth scale. Streaming services make it possible for independent artists to reach a worldwide audience, mostly without gatekeepers adjudicating their suitability for some marketing strategy or another.

The key to all of this is a category of cloud service called "music distribution services." For a shockingly nominal fee (as little as $10), you can get a track uploaded to all of the major services. I paid $49 to CD Baby to distribute my EP, and they did just that. It's now on Spotify, Apple Music, Amazon Music, and about 150 other streaming platforms.

Also: I used ChatGPT to rewrite my text in the style of Shakespeare, C3PO, and Harry Potter

Everything I've done in getting my EP out could have been done without an AI's help. I have a ton of product marketing experience (back in the day I was a product marketing director for a major software company). But while product marketing and producing music are similar trades, there is a lot of domain-specific music industry knowledge I didn't have going into this process. More than that, there's also the music industry jargon.

To become a verified artist on Spotify, for example, it was up to me to properly fill out all the forms, sign up for the artist account, and curate the music. I also went to Copyright.gov to register my copyrights online, and signed up with a performing rights organization (PRO) so royalties on radio play will be collected. There is still a lot of product management work to getting music out, but if you're willing to do the work, you can get distribution.


The next step was promotion. I'm using a service called Groover to introduce my tracks to playlist curators, who then decide if they want to add my music to their playlists. So far, four such curators have added my tracks to their playlists, which means I'll be able to reach listeners beyond my circle of friends and social media followers.

As a good researcher, I was able to pick up the procedural knowledge necessary to get the music out. But ChatGPT and Midjourney provided the stylistic notes for the project that I couldn't have produced otherwise.

Also: Apple Music finally adds personalized recommendations, while Spotify expands AI DJ

While there's always some concern about AI usage, it's clear I wouldn't have done this project — at least at this time — without the help of the AI-generated analysis, images, and descriptions.

Over the next year, I intend to make four more "waterfall" releases (three more EPs and the full 12-track album). I composed all the tracks quite some time ago. Most of the descriptions, and all of the art, have already been generated by the AI tools.

And with that, the first wave of this project is done. If you want to listen to any of these tracks, point your browser to House of the Head and click directly into the music service of your choice.

Also: Everything you need to start a podcast: The best microphones, headphones, and software

So, does this project give you any ideas? Have you been wanting to distribute music and now have a better roadmap? Let me know in the comments below.

You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter on Substack, and follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.

Artificial Intelligence

Data visualization startup Virtualitics lands $37M investment

By Kyle Wiggers

Many companies grapple with data challenges. In a 2019 Deloitte survey, 67% of executives said they weren't comfortable accessing or using the data at their organizations. In a separate poll from NewVantage Partners, meanwhile, fewer than a third of firms identified themselves as data-driven, despite significant investments in AI and business analytics tools.

According to Michael Amori, the problem often lies in tooling. He's the co-founder of Virtualitics, a startup developing software to help companies visualize their data and, with any luck, gain insights from it.

“Common dashboard tools fall short of revealing the hidden insights buried in today’s intricate data,” Amori told me in an email interview. “And when bias, privacy and ethics are becoming even more important, having a solid understanding of the data, outliers and patterns, companies can create an environment of responsible usage.”

Virtualitics, launched in 2016, was born out of Caltech and NASA’s Jet Propulsion Lab in Pasadena. A few years ago, Amori was introduced to George Djorgovski, a professor of astronomy and data science at Caltech, and Ciro Donalek, a computational staff scientist at Caltech’s Center for Data-Driven Discovery, which Djorgovski was heading at the time.

“Donalek’s expertise in AI, particularly in aiding Caltech astronomers with big data analysis, and his work in creating collaborative virtual spaces converged,” Amori said. “Virtualitics was born from this, focusing on three-dimensional visualizations to elevate data analysis beyond traditional methods.”

At a high level, Virtualitics uses 3D visualizations, knowledge graphs and AI to expose the relationships between different points of data. Given a data set (or several), optionally along with a question in plain English (e.g. “What drives credit card skimming?”), the platform can generate annotations and explanations, which can then be embedded in reports and dashboards and shared with stakeholders across an organization.

A customer in the financial industry could, for example, use Virtualitics to spot patterns of payment and wire fraud. Or a marketing company could leverage the platform to identify emerging customer segments and the channels likely to perform best.

But plenty of business intelligence tools visualize data, including Bayes, which Airtable acquired in 2021, and London-based, Canva-owned Flourish.

So what makes Virtualitics different? Its data visualizations can be viewed in VR and AR, for one. But Amori argues the platform’s also simpler to use and more powerful than most solutions on the market — and, perhaps most important of all, doesn’t require deep technical expertise.

Virtualitics’ 3D-centric data visualization platform.

“Traditional data exploration tools have limited capabilities in identifying and visualizing the complexity of today’s data,” he said. “Also, traditional analytic techniques and dashboards fall short in providing visually intuitive outputs, making it hard to truly understand what the findings mean or predict what lies ahead. All of this is combined with the fact that humans approach a data set already with a bias, a preconceived notion of what might be going on in the data and then they explore the data to see if their hypothesis was correct.”

The jury’s out on all that. But it’s certainly true that companies often struggle to drive internal use of whatever business intelligence software they’ve invested in.

In a 2020 business intelligence survey from 360Suite, companies said that the main challenges they face are managing user adoption and data quality control. Cost control and security were cited as the other major blockers in achieving data analytics goals.

“While Virtualitics may occasionally find itself categorized alongside business intelligence tools, our approach is substantially different,” Amori said. “Traditional business intelligence tools are built to ‘report the news,’ with the aim of making data more accessible through simple dashboard reports.”

In a testament to Virtualitics’ success — or at least the strength of its marketing efforts — the company’s year-over-year revenue has increased 370%. Amori credits Virtualitics’ recently acquired government sector customers, which include the Department of Defense, with the growth.

“Virtualitics has partnered with the defense and national security community since 2017 on projects ranging from operational readiness, investment analysis and mission support and intelligence analysis, among others,” Amori said, noting that Virtualitics recently appointed a retired U.S. Army General, John Murray, and former U.S. Navy Vice Admiral, Timothy White, to its advisory board.

Gearing up for the next phase of expansion, Virtualitics today announced that it raised $37 million in a Series C funding round led by Smith Point Capital with participation from Citi and advisory clients of The Hillman Company. Bringing the startup’s total raised to $67 million, Amori says that the new cash will be put toward collaborations, customer success efforts and expanding Virtualitics’ headcount (which currently stands at 76 people).

“The motivation behind raising a Series C funding round was driven by two key factors,” Amori said. “First, our company has a strong track record of successfully collaborating with the Department of Defense on mission-critical programs, and this aspect of our business continues to experience significant growth and expansion. However, we also recognize the increasing demand for AI-driven analysis within the enterprise sector, as data size and complexity continue to grow exponentially. With the new funding, we will be able to accelerate our roadmap, integrating more AI and specifically generative AI technology into our platform, and further scaling our business to meet the evolving needs of our customers and the market.”

Nvidia Expands Its AI Strategy With New Hugging Face Integration, Enterprise 4.0, AI Workbench

August 10, 2023, by Jaime Hampton

Nvidia once staked its entire future on the promise of artificial intelligence, Nvidia CEO Jensen Huang told an audience at SIGGRAPH in Los Angeles this week.

“Twenty years after we introduced to the world the first programmable shading GPU, we introduced RTX at SIGGRAPH 2018 and reinvented computer graphics. You didn’t know it at the time, but we did, that it was a ‘bet the company’ moment,” Huang said during his keynote.

The RTX was Nvidia’s reinvention of the GPU, designed to unify computer graphics and artificial intelligence in order to make real-time ray tracing feasible: “It required that we reinvent the GPU, added ray tracing accelerators, reinvented the software of rendering, reinvented all the algorithms that we made for rasterization and programmable shading,” said Huang.

While the company was transforming computer graphics with AI, it was also reinventing the GPU for a brand new age of AI that had not fully come into view until recently. Continuing its progress in enabling AI development in this brave new world, Nvidia announced a flurry of new products and services at SIGGRAPH, including a Hugging Face partnership, an update to its AI Enterprise software, and a new developer toolkit called AI Workbench.

Generative AI Takes Center Stage

AI is a household concept thanks to the explosive popularity of ChatGPT and similar generative AI models that are completely changing the tech landscape. Nvidia has been a critical player in this new market with its developer-focused technology.

“Nvidia is a platform company. We don’t build the end applications, but we build the enabling technology that allows companies like Getty, like Adobe, and like Shutterstock to do their work on behalf of their users,” said Nvidia’s VP of Enterprise Computing, Manuvir Das, in a press briefing.

Pre-trained foundation models like GPT-4 and Stable Diffusion can serve as a good starting point for building applications, but customization is key for businesses building their own AI models, Das said, noting that customization and fine-tuning go a long way in determining the efficacy and the output of a model.

Many foundation models are trained with large datasets of public data, leaving them prone to hallucinations and less accurate outputs. Using domain-specific data can vastly improve accuracy, which is a major priority for generative AI use cases in sectors like healthcare and financial services where every detail counts.

Das said Nvidia views leveraging generative AI as a three-step process. The first step is having the right foundation models, trained over months with large amounts of data. The next step is customization, which can be complex: “There are many, many techniques for how to customize, but essentially, you’re producing a model that is infused with domain data, relevant data, and examples, so it can do a much better job,” Das said. The third facet of AI deployment is embedding the models through an API into applications and services to take them into production. Here are Nvidia's newest tools for this three-step process.

Integration with Hugging Face

A new partnership will bring the AI resources of Nvidia’s DGX Cloud, its AI computing platform, to the popular open source machine learning platform Hugging Face.

Those developing large language models and other AI applications will have access to Nvidia's DGX Cloud AI supercomputing within the Hugging Face platform to train and tune advanced AI models. Hugging Face says over 15,000 organizations use its platform to build, train, and deploy AI models using open source resources, claiming its community has shared over 250,000 models and 50,000 datasets.

The DGX Cloud-powered service, available in the coming months, is Hugging Face’s new Training Cluster as a Service meant to simplify creating new and custom generative AI models using Nvidia’s software and infrastructure. Each instance on the DGX Cloud features eight Nvidia H100 or A100 80GB Tensor Core GPUs for a total of 640GB of GPU memory per node.

“People around the world are making new connections and discoveries with generative AI tools, and we’re still only in the early days of this technology shift,” said Clément Delangue, co-founder and CEO of Hugging Face in a statement. “Our collaboration will bring Nvidia’s most advanced AI supercomputing to Hugging Face to enable companies to take their AI destiny into their own hands with open source and with speed they need to contribute to what’s coming next.”

Nvidia AI Enterprise 4.0

The company also announced Nvidia AI Enterprise 4.0, the latest version of its enterprise platform for production AI that has been adopted by companies like ServiceNow and Snowflake.

"This is essentially the operating system of modern data science and modern AI. It starts with data processing, data curation, and data processing represents some 40, 50, 60 percent of the amount of computation that is really done before you do the training of the model," Huang said.

Enterprise 4.0 now includes several tools to help streamline generative AI deployment. One new addition is Nvidia NeMo, a framework the company launched last September that contains training and inferencing frameworks, guardrailing toolkits, data curation tools, and pretrained models.

Another new inclusion for AI Enterprise 4.0 is the Nvidia Triton Management Service which automates the deployment of multiple Triton Inference Server instances in Kubernetes. Nvidia says the new service enables large-scale inference deployment through efficient hardware utilization. The software application manages the deployment of Triton Inference Server instances with one or more AI models, allocates models to individual GPUs and CPUs, and efficiently collocates models by frameworks.

Nvidia Enterprise is supported on the company’s RTX workstations which have three new Ada-generation GPUs: RTX 5000, RTX 4500, and RTX 4000. The 48GB workstations can be configured with AI Enterprise or Omniverse Enterprise.

Nvidia AI Enterprise 4.0 will also be integrated into partner marketplaces, including AWS Marketplace, Google Cloud, and Microsoft Azure, as well as through Nvidia cloud partner Oracle Cloud Infrastructure, the company said.

AI Workbench: A New Developer Toolkit

Finally, Nvidia unveiled its AI Workbench, a toolkit for creating, testing, and customizing generative AI models on a PC or workstation and then scaling them to any datacenter, public cloud, or DGX Cloud.

There are hundreds of pretrained models now available, and finding the right frameworks and tools when building a custom model for a specific use case can be challenging. Nvidia says its new AI Workbench lets developers pull together all necessary models, frameworks, software development kits and libraries from open source repositories (and its own AI platform) into a unified developer toolkit.

Nvidia says its AI Workbench streamlines selecting foundation models, building a project environment, and fine tuning these models with domain-specific data. Developers can customize models from repositories like Hugging Face and GitHub using this custom data and then share the models across multiple platforms.

“This Workbench is a collection of tools that make it possible for you to automatically assemble the dependent runtimes and libraries, the libraries to help you fine tune and guard rail, to optimize your large language model, as well as assembling all of the acceleration libraries, which are so complicated, so that you can run it very easily on your target device,” Huang said in his SIGGRAPH keynote.

For the many companies also placing large bets on generative AI, Das says the real power of Nvidia's platform lies in its flexibility.

“What we really believe in at Nvidia is, once you produce the model, you can put it in a briefcase and take it with you wherever you want,” Das said. “And all you really need is the runtime from Nvidia that you can take with you so that you can run the model wherever you want to run it. And as you deploy the model, you want that software to be enterprise-grade so that you can bet your company on it.”

Related

Understanding the future of smart cities through data science

The concept of smart cities is to use advanced technologies to minimize traffic congestion, manage waste better, and improve the quality of life for people. Data science will play a critical role in managing intelligent cities, surfacing insights that help city managers make data-driven decisions. Big data will offer a unique opportunity for running sustainable and livable cities.

City dwellers will benefit from lower energy use, less pollution, and better air quality. The development of these cities faces challenges such as competition for resources and data privacy issues. Urban managers and dwellers will use various AI-driven techniques, systems, and processes to get real-time analysis and reports of what is actually happening.

Image Credit – Pexels

How data science will help intelligent cities become smarter and more efficient

Advances in data analytics will offer unprecedented opportunities for urban environments to increase sustainability, resilience, and livability. Leveraging data-driven insights will be critical to making informed decisions. City management authorities will rely on data to improve traffic flow, manage energy distribution and use, handle waste, and plan smarter infrastructure.

Data analytics in smart cities will provide quick and dependable ways of analyzing raw data to help understand the real-time dynamics of cities. It will be useful in planning and developing adaptations to new challenges. Efficiency is important in advanced cities: big data will help minimize environmental pollution and improve air quality, and urban management will supply the right amount of energy to keep systems running and buildings livable.

As smart cities develop, society needs to address a variety of data privacy and security risks, and resolving them requires a holistic approach. Wi-Fi will play a major role in intelligent cities, but various Wi-Fi security issues may arise. One of them is a network blocking encrypted DNS traffic, which often appears when a user joins a new Wi-Fi network; it is a common issue that affects network security on Macs.

Failing to resolve this privacy warning on a Mac could compromise your data and privacy. DNS blocking happens when a network operator interferes with DNS encryption so that it can snoop on your traffic; that is when your device's OS displays the message that the network is blocking encrypted DNS traffic. To resolve it, try restarting your device, reconnecting to the Wi-Fi network, updating your device, changing your connection password, or using a VPN.

Image Credit- Pexels

Benefits of using data science in smart cities

The effects of climate change are impacting every sector of society. Globally, societies are experiencing challenges such as:

● Hotter temperatures

● Declining food production

● Worsening human, animal, and plant health

● More species going extinct

● Increasing poverty, droughts, and life-threatening storms

One of the aims of developing intelligent cities is to find solutions to these challenges. Scientists are finding ways to minimize the production of CO2 and improve human life. The aim of data-driven smart cities is not only to minimize CO2 emissions but also to provide a variety of benefits to urban dwellers.

Produce more energy and use less. Smart city technologies aim to save energy in a wide variety of ways.

Ensure cleaner air for urban dwellers. Smart urban planning authorities will use technologies to measure air quality and identify sources of pollution.

Enhanced transportation. Data analysis and city systems in smart cities aim to optimize mobility in urban places. Data will help pinpoint challenges in transportation systems, minimize congestion, and provide real-time traffic updates.

Enhanced waste management. AI-driven city management will help gather data across ecosystems for waste recycling, repair, and reuse. Big data will help reduce waste production and improve the management of waste-collection channels.

Enhanced public safety and quality of life. Data analysis in smart cities will help security teams monitor real-time happenings across streets and buildings using AI-driven cameras.

Image Credit – Pexels

Challenges that cities face in implementing data-driven solutions

Terabytes of data can be generated daily, which is important for improving efficiency in smart cities. However, big data poses a major storage challenge for both city managers and residents, and the data generation, processing, and storage systems are prone to cyberattacks.

International and local policies for data privacy and sharing keep changing, which poses a major challenge to companies, governments, and individuals. As urban technology improves, legislation needs to keep pace; current laws are full of loopholes that hinder the swift implementation of smart-city policies. Greater connectivity and efficiency are needed, but intelligent use of big data is still lacking.

Conclusion

The realization of smart cities is approaching fast as societies become more intertwined with urban technology. Big data is playing a major role in speeding up the pace and improving efficiency and quality of life. Still, there are several drawbacks the current generation has to deal with: cybersecurity, data privacy, CO2 emissions, legislation, and the economic well-being of city residents.