The Best Interface to AI is ‘No Interface’

After two years in the IT industry, 23-year-old Suyash realised coding wasn’t his forte. He wanted to pivot to something more user-facing with a low barrier to entry, where his creative abilities stood a chance, and he didn’t want to spend another year or two on a new degree.

Suyash picked up UX design and joined one of the more popular boot camps in the country. The program cost was around a lakh, which was more than half of what he had saved up.

However, doubts began to creep in just two weeks into the program. Around this time, Figma, the ubiquitous design tool, launched several AI-integrated features, alongside a wave of other tools positioned to take over much of the work junior designers were expected to do.

Not just Suyash, several other users were left wondering if a ChatGPT moment was on the horizon for the world of product design. But there is more to it than meets the eye.

AI is an Option, Not a Step in the Process Yet

Recent AI integrations in Figma speed up the ideation process by generating UI concepts from a simple prompt. After Figma faced a backlash because the generated designs looked eerily similar to iOS screens, the company reworked the feature in the latest beta release. Called First Draft, the feature is quite impressive at first glance.

It does give users a head start by quickly generating an interface built on a design system. A review from DesignerUp, a popular UX blog, stated, “It handles auto layout perfectly, creating nested and component designs seamlessly.”

The components and elements in the generated interface are all part of a design system too, which means users can see how the design is actually constructed and edit it, rather than being handed an opaque output.

Given that most entry-level jobs involve ideation and low-fidelity prototypes, the advent of AI in Figma does spark a sense of fear.

The tool, however, has its fair share of challenges.

For one, the review mentions issues with creating more than a single screen: “After trying multiple prompts to generate more than one screen, it failed on all accounts. This, to me, is one of the biggest limitations and drawbacks right now – its inability to generate full flows or add additional screens.” Moreover, Figma isn’t capable of producing multiple screens with consistent styles and elements, a classic AI problem.

Moving beyond Figma, there are prompt-based tools with all sorts of capabilities, such as Galileo, Uizard and Motiff, among others.

AIM spoke to Likitha S, a freelance designer with over five years of experience, to get a first-hand perspective on such tools.

One of the first concerns she raised was that prompting the AI to get exactly what she wanted took a lot of time and often required her to lay down the ideas herself.

Speaking about her experience with a prompt-to-prototype tool, she said, “A lot of it was devoid of UX thought – but I can see it improving if the underlying model gets better.” Likitha (who goes by her first name) asserted that there is a lot to be done before AI becomes a step in the design process rather than just an option one can use.

Despite these limitations, it is fair to say that AI in UX instils a sense of fear. So how does one stand out? And where is the opportunity to augment the user experience in current AI systems?

The Treasure Chest Is Yet to Be Found

There is no better person to answer these questions than product designer and author Akshay Kore. Well before tools like ChatGPT and Claude were released, Kore wrote a book in early 2022 titled ‘Designing Human-Centric AI Experiences’, in which he outlines best practices for designing AI systems that offer a safe, competent, and trustworthy user experience.

There is indeed massive potential to build better interfaces, and Kore illustrates the point uniquely. “You don’t really want to set an alarm. You want to wake up on time. You don’t really want to write a PRD. You want to communicate effectively and build the product.”

Great UX and design seems like a bit of a lost art in 2024 for new builders and founders. What can we do to fix this?

— Garry Tan (@garrytan) April 25, 2024

At a high level, technology is a medium for translating intent into an outcome. Essentially, most layers, buttons and screens are instances of technology getting in the way of people and their goals.

“We’re sort of getting very close to that original vision of computing,” Kore said.

While Anthropic, OpenAI, and Google’s Gemini models already accept multimodal input, Kore says there certainly are opportunities to reimagine these interfaces. Several developers are already working on building interfaces that make handling multimodal communication more efficient.

🎨 Introducing Prompt Canvas — a novel UX for prompt engineering
Building LLM applications requires new and dedicated tools for prompt engineering. With Prompt Canvas in LangSmith, you can:
• Collaborate with an AI agent to draft, refine, and edit your prompts
• Define custom… pic.twitter.com/mLY6aLwlyr

— LangChain (@LangChainAI) November 12, 2024
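
To ground what “multimodal input” looks like from the developer’s side, here is a minimal sketch of sending a UI screenshot along with a text prompt to a model, assuming the OpenAI Python SDK; the model name, image URL and prompt are illustrative placeholders rather than details from the interviews.

# Minimal sketch: text plus an image (a UI screenshot) sent to a multimodal model.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set in the environment.
# The model name, image URL and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Critique the visual hierarchy and spacing of this sign-up screen."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/signup-screen.png"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)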

If one wants to make the most of the AI era, Kore says designers will need to start embracing these tools. He adds that good communication skills and the ability to convey ideas effectively across the team are of paramount importance.

Moreover, he believes early-career designers have an advantage: they do not carry the legacy baggage of senior designers and can explore newer perspectives, tools, and techniques that redefine traditional design processes. Seasoned designers, on the other hand, may experience some friction when adopting new technologies and AI tools.

“We still don’t know who an AI-first designer is,” he added.

You’ve Got to Run a Triathlon of Sorts

Both Likitha and Kore believe that AI is here to automate some of the more boring and repetitive parts of the design process. But are young designers getting a chance to unleash their judgement, reasoning and decision-making skills?

AIM spoke to Shreyas Satish, founder and CEO of Ownpath, a design training and consultancy firm. Satish believes that AI will blur the lines between a designer, a front-end developer, and a product manager in the future.

"A designer, an iOS engineer, and a web engineer. Are you getting it? These are not three separate people. This is one person. And we're calling it a design engineer." pic.twitter.com/SCvoCNeHYo

— Austin Valleskey (@austinvalleskey) September 27, 2024

“Somebody who bridges that gap between design and engineering, and is comfortable with at least some level of coding to generate working prototypes with an AI engine, is another generalist role I’ve seen coming up,” he said.

Recently, Sahil Lavingia, founder of Gumroad, took to X and said, “We’re no longer hiring designers. Only design engineers.”

Both Kore and Satish agree that user research will gain increasing importance as novel AI products come into the picture.

“If you take a lot of the software, it’s pretty much one size fits all kind of…products, but there is a huge opportunity to pick very specific customer segments and design personalised experiences,” Satish further said. “Remember, there are a lot of people who have yet to come onto the internet in India itself. So again, research plays a huge role in understanding how they view these devices and how they can benefit from it.”

Moreover, the ability to sniff out problems is going to be highly valued, since users often cannot articulate their problems or what exactly they want. It brings to mind the line often attributed to Henry Ford, founder of Ford Motor Company: “If I had asked people what they wanted, they would have said faster horses.”
