This is an excerpt from the first issue of AI Insider – a freshly-baked digest for all things at the intersection of UX and AI! Subscribe and join our mailing list here.
There’s a lot happening in the world of AI. But what do these changes mean for you?
We'll be diving into the AI world every month, fishing out the juiciest AI updates and helping you make sense of them. From groundbreaking AI tools that could revolutionize your workflow, to the latest debates on how AI is reshaping our field – we've got you covered.
AI <> UX Updates | July 2024
What’s New?
- Uizard 2.0 is out!
- Claude climbs to the top of the LLM mountain
- Apple is building AI, with an emphasis on user privacy
- Looppanel can now auto-tag your data
Uizard Autodesigner’s latest update
Uizard, the AI-powered UI generator, has released version 2.0, which lets you create UI elements with text prompts and use an AI chatbot assistant to iterate on designs quickly. Check it out here.
Why is this a big deal?
We tested out Uizard’s latest capabilities for you, of course. Here’s the TLDR:
- It’s definitely the best AI UI generator out there, but it’s not groundbreaking (yet). Templates and UI kits are still better to get started with if you’re actually designing something.
- Their ‘Chat with AI’ is pretty handy though—you can ask AI to make specific changes to your design (e.g., change the button styles) which allows you to iterate very quickly.
- They’ve released the ability to generate images from text prompts—this is handy for internal mockups.
Find the detailed review here.
Why should UX-ers care?
Uizard is on a promising path. If you're an early adopter, you should already be tinkering with it. If you're waiting for the tools to be fully baked, you can give it a few more months, but we're getting closer and closer to an AI+UX design future. I expect that a year from now we'll be using AI to generate the first versions of a design. Today, you can already use Uizard to iterate on designs with text-based prompts, making iteration much, much faster.
Claude 3.5 becomes best of LLMs
Anthropic just released Claude 3.5 Sonnet, an update to their LLM, and critics and AI enthusiasts have already crowned it king of LLMs. It's more intelligent and faster than other models out there (yes, including GPT!), and it's available at a much lower price than its competitors. Check it out here.
Why is this a big deal?
- This model is genuinely better and smarter than any we’ve tried before, especially for writing-based tasks
- Sonnet can process visual information. It can interpret charts, graphs, and even handwritten text. We tested its visual processing capability—it's very good with well-structured digital documents (e.g., a UI image, graphs from PDFs), but only okay at transcribing handwritten text (a person needs to verify the output for accuracy).
- You can now use Claude to generate code, SVG images, written documents, and diagrams. These are called ‘artifacts’.
Why should UX-ers care?
New use cases that have opened up / improved:
- You can write better reports, blog posts, and emails than with any model we've seen before
- You can use Claude to process UI images, analyze graphs for you (e.g., survey responses), or digitize hand-written notes / sketches
- For UX-ers working in fields with large amounts of visualization or hand-written text (e.g., survey tools, logistics companies), new use cases may be opening up for your customers.
- Honestly, I’m most excited about ‘Artifacts’. They’re quite useful if you want to create a quick visual, test out different UI views, iterate on them quickly, or even launch a website rapidly. Below is our output from a quick test where we asked Claude to create a modern-looking nav bar in black and white (we iterated on it a couple of times, but it’s not bad!). Claude’s output is actually in HTML here, but it gives you a visual preview as well. This is another example of us getting closer to text-prompt-based UI design!
- You can also use Claude to generate SVG images, but the quality right now is pretty poor. This is its rendering of a dog:
Apple catches up with the AI revolution
At their recent Worldwide Developers Conference (WWDC), Apple unveiled Apple Intelligence, a new AI system integrated into iPhone, iPad, and Mac ecosystems.
Here’s a recap of all the updates.
Why is this a big deal?
Many reasons. Apple is one of the biggest brands to enter the AI game, but its approach remains uniquely Apple. For one, Apple is still focusing on delivering the best possible experience for its users. For another, they’re taking a privacy-first lens, with most AI features running locally on your device instead of relying on an external service. This keeps your data secure and visible only to you.
Powerful AI foundation models need a LOT of energy and can drain a standard smartphone very quickly. So Apple has limited the update to newer iPhones and iPads, and used ‘travel-size’ language models that work faster, albeit with fewer capabilities. They can’t build you an app, but they can write emails for you.
For heavier workloads, they use another set of models hosted on something called Private Cloud Compute. Again, privacy first.
Apple has also been realistic: its server models still can’t keep up with ChatGPT or Anthropic’s Claude. So they’ve set up a partnership with OpenAI to max out on AI power.
Why should UX-ers care?
Apple’s launch is a lesson in how important user trust will be for products in the near future, especially LLM- and AI-powered tools. People are wary of sending their personal data off to some nebulous cloud, so Apple developed Private Cloud Compute – a super-secure server infrastructure designed to handle the heavier AI workloads that can’t run on your device. It’s highly focused on privacy and security: it uses stateless computation, which means your data isn’t stored persistently; it’s processed and then immediately discarded.
As the launch keynote insisted, even Apple can’t access your data, even if it wanted to. “Trust us with your data” – that’s the message users need to hear again and again.
Looppanel launches auto-tagging
Sorry to toot our own horn, but this is a pretty big breakthrough for UX researchers everywhere!
The Looppanel team launched an Auto-tagging feature earlier in June, and it’s a game-changer.
Why is this a big deal?
Looppanel can now analyze all your notes and interviews in minutes and automatically group them by key themes and topics. All you need to do is review them.
Why should UX-ers care?
With Auto-tagging, it’s like having an incredibly smart (and tireless) research assistant at your beck and call.
Instead of spending hours slogging through transcripts and manually tagging your qualitative data, you can get AI to do it in minutes.
All the manual, soul-crushing, repetitive tasks get done by AI, so researchers can spend their time on strategic analysis and deriving insights from the data.
This is what one user had to say about it—
“For teams working in UX, this tool is an absolute game-changer. It will make your team 1000x more efficient – and I'm not exaggerating.”