
Will Fusaka Change How We Build and Use Web3 Mobile Apps?

We may be witnessing the rollout of features that will redefine how Web3 applications are built. The Ethereum Fusaka (Fulu-Osaka) fork is expected to reach mainnet in early December 2025.

Twelve important EIPs have been approved and are currently being deployed on testnets, but the one that interests me most is EIP-7951, which adds native support for verifying ECDSA signatures on the secp256r1 elliptic curve, better known as P-256. That sounds incredibly technical, but it is a crucial change for Web3 adoption among regular users. So let's unpack this topic a bit more deeply.

What is P-256 and Why Does It Matter?

P-256 is the elliptic curve used for digital signatures by Apple's Secure Enclave, the Android Keystore, and WebAuthn authenticators. Native support for it means these devices will be able to authorize transactions on the Ethereum network directly, without the application ever storing a private key.

To understand this, imagine the difference:

Today: Your cryptocurrency wallet key is like a safe key stored in a drawer. It can be found, stolen, or lost.

After EIP-7951: Your key is locked in an indestructible, unextractable chip in your phone. You can use it (through Face ID), but you can never remove it—even if someone steals your phone and hacks the system.

This change is fundamental because P-256 is a standard used by billions of devices worldwide: every iPhone since 2013, most Android phones, YubiKeys, TPMs in laptops, and even some credit cards. Until now, Ethereum didn't support this—it required its own cryptographic curve (secp256k1), which has no hardware support in consumer devices.

The Missing Link in Account Abstraction

In short: EIP-7951 is the missing link in Account Abstraction. More precisely, it introduces support for signing Account Abstraction transactions directly from these devices, without needing a locally generated wallet and its seed phrase.
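To make this concrete, here is a sketch of the data a smart wallet might hand to the P-256 verification precompile that EIP-7951 adds. The precompile address (0x100) and the 160-byte input layout (hash || r || s || qx || qy, returning a 32-byte word of 1 on success) follow my reading of the draft spec and may differ in the final version.

```python
# Assumption: proposed precompile address and layout from the draft EIP.
P256VERIFY_ADDRESS = 0x100

def encode_p256verify_input(msg_hash: bytes, r: int, s: int,
                            qx: int, qy: int) -> bytes:
    """Pack the five fields as consecutive 32-byte big-endian words."""
    if len(msg_hash) != 32:
        raise ValueError("msg_hash must be 32 bytes")
    return msg_hash + b"".join(w.to_bytes(32, "big") for w in (r, s, qx, qy))

# A wallet contract would send these 160 bytes to the precompile and
# treat a 32-byte word equal to 1 as "valid", empty output as "invalid".
payload = encode_p256verify_input(b"\x11" * 32, 1, 2, 3, 4)
```

The point of the fixed layout is that the signature produced inside a phone's secure hardware can be checked on-chain without any translation layer in between.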

What is Account Abstraction?

Account Abstraction (AA) is a technology that replaces the traditional "account-key" model with a "smart wallet" model. In practice, this means your Ethereum wallet is no longer just a key pair, but a full-fledged program running on the blockchain with its own logic and rules.

Until now, AA solved many problems, including the ability to recover your wallet through friends, paying for transactions in tokens instead of ETH, and batching multiple operations into one. However, it still required having a seed phrase—those 12 or 24 words you have to write down and store in a secure place. EIP-7951 resolves this final obstacle.

How Will This Work for Regular Users?

Imagine you're installing a new cryptocurrency app. The experience will be dramatically different from what exists today.

Installation and Setup Takes Just Two Minutes

When you download the app and open it for the first time, the app asks for Face ID or Touch ID permission, just like any other banking app you might have installed. That's it. You're done. You have a wallet. There are no 12 words to write down, no warnings screaming "NEVER SHARE THIS WITH ANYONE", no stress about secure storage, and no risk of forgetting passwords that could cost you everything.

What happened behind the scenes is quite remarkable. Your phone generated a P-256 cryptographic key directly in the Secure Enclave on iPhone or the hardware-backed Keystore on Android. This is a dedicated security component that the key can never leave. Even if someone hacks your phone, roots it, or steals it outright, they cannot extract the key. It is physically isolated in hardware designed specifically to resist extraction.

The app also calculated your "smart wallet" address on the blockchain. Think of it like a bank account number. You can immediately show it to friends so they can send you cryptocurrencies, even though the actual smart contract wallet hasn't been deployed yet. The address is deterministic and predictable, which means it exists as a valid destination even before the wallet contract is created on-chain.
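The determinism comes from CREATE2-style address derivation: the address is a pure hash of the factory address, a salt, and the wallet's init code, so anyone can recompute it before deployment. A minimal sketch, with the caveat that Ethereum uses keccak256, which Python's stdlib doesn't ship; sha3_256 stands in here purely to show the shape:

```python
import hashlib

def create2_address(factory: bytes, salt: bytes, init_code: bytes) -> str:
    # CREATE2 shape: hash(0xff || factory || salt || hash(init_code));
    # sha3_256 is a stand-in for Ethereum's keccak256.
    preimage = b"\xff" + factory + salt + hashlib.sha3_256(init_code).digest()
    digest = hashlib.sha3_256(preimage).digest()
    return "0x" + digest[-20:].hex()  # the last 20 bytes become the address

# Same factory, salt, and wallet code always yield the same address.
addr_a = create2_address(b"\x01" * 20, b"\x00" * 32, b"wallet-init-code")
addr_b = create2_address(b"\x01" * 20, b"\x00" * 32, b"wallet-init-code")
```

Because nothing in the formula depends on the contract actually existing yet, the app can display the address the moment the key is generated.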

Sending Money Becomes Effortless

When you want to send money, you click "Send" and enter the recipient's address and amount. Face ID appears on your screen, exactly like when you use Apple Pay to buy something. You authenticate with your face or fingerprint, and the transaction is sent. Done.

There's no typing of complex passwords, no copying and pasting of long cryptographic keys, no confirming the same action across three different screens, and no nagging fear about whether you typed everything correctly. The experience feels identical to sending money through Venmo or Cash App, except you're using blockchain technology with full self-custody of your funds.

Wallet Recovery Through Social Recovery

This is where the magic happens, connecting EIP-7951 with Smart Contract Wallets in a way that fundamentally changes the security model.

Consider the scenario where you've lost your phone or you're upgrading to a new device. In the traditional cryptocurrency model, if you don't have the 12 words from your seed phrase written down somewhere safe, you've lost all your money. There's no customer service to call, no password reset button, no recovery process. The funds are simply gone forever.

The new model with Social Recovery works completely differently. When you first set up your wallet, you choose three to five "guardians." These could be your partner, a trusted friend, a second device like an iPad, a professional recovery service, or a family member with their own crypto wallet. The key insight is that you're distributing trust rather than concentrating it entirely in your ability to safeguard 12 words.

When you lose your phone and install the app on a new device, you select "Recover wallet" and provide your old wallet address. The app then sends a request to your guardians, who receive a notification saying something like "John is trying to recover his wallet." Two out of three guardians must approve this request in their own apps with a simple click. After a 48-hour security waiting period, your wallet is recovered with a new key on the new phone. All your money remains safe, the wallet address stays the same, and everything continues working as before.
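The flow above boils down to two checks: an approval threshold and a time delay. Here is a toy model of that logic; the names, the 2-of-3 threshold, and the 48-hour delay mirror the example in the text but are not taken from any real wallet contract:

```python
from dataclasses import dataclass, field
from typing import Optional

DELAY_SECONDS = 48 * 3600  # the security waiting period

@dataclass
class RecoveryRequest:
    guardians: frozenset
    threshold: int = 2
    approvals: set = field(default_factory=set)
    started_at: Optional[float] = None

    def approve(self, guardian: str, now: float) -> None:
        if guardian not in self.guardians:
            raise ValueError("not a registered guardian")
        if self.started_at is None:
            self.started_at = now  # the clock starts with the first approval
        self.approvals.add(guardian)

    def can_execute(self, now: float) -> bool:
        # Recovery needs BOTH enough approvals AND the full waiting period.
        return (self.started_at is not None
                and len(self.approvals) >= self.threshold
                and now - self.started_at >= DELAY_SECONDS)

req = RecoveryRequest(guardians=frozenset({"partner", "friend", "ipad"}))
req.approve("partner", now=0)
req.approve("friend", now=600)
```

The delay is what turns a stolen guardian approval from a catastrophe into an inconvenience: the real owner has two days to notice and cancel.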

Why is P-256 Better Than What We Have Now?

The current Ethereum standard, secp256k1, is not supported by any common security hardware in consumer devices. This creates a cascade of problems that have plagued cryptocurrency adoption for years.

Applications using secp256k1 must generate keys in software, which is inherently less secure than hardware key generation. They must store these keys somewhere in the application's memory or encrypted storage, which means the keys can potentially be extracted by sophisticated attackers. They must ask users to write down 12 words and keep them safe forever, creating an enormous barrier to entry. And they must hope that users don't lose those words, which happens distressingly often—given that around 20% of all Bitcoin is estimated to be permanently lost due to forgotten or misplaced seed phrases.

P-256 solves all of these problems simultaneously. The key is generated by a dedicated security chip built into the device, not by software. It never leaves the device—it's impossible to extract even with root access or sophisticated hacking attempts. Users see no words and don't need to write anything down, as everything is handled transparently through biometric authentication. And if someone loses their phone, it doesn't mean losing their wallet, thanks to social recovery mechanisms.

Fusaka and the Shift of Adoption Weight to Mobile Devices

Here we arrive at perhaps the most revolutionary consequence of this entire upgrade: Fusaka might shift the center of gravity for Web3 adoption decisively toward mobile devices.

For the past several years, we've observed a strange disconnect in Web3 development. Desktop browsers and MetaMask dominated the development landscape. The vast majority of decentralized applications, developer tooling, and infrastructure was built with desktop users in mind. This ran directly counter to how the rest of the world actually uses the internet.

Mobile devices dominate global internet usage. Over 60% of all internet traffic comes from mobile devices. For users under 30 years old, that figure exceeds 80%. In developing countries across Africa, Southeast Asia, and Latin America, the first internet access for millions of people comes through smartphones, not computers.

The combination of EIP-7951, Account Abstraction, and mobile hardware security creates something that hasn't existed before: mobile devices with a genuine advantage over desktop for Web3 interactions. Every iPhone and Android phone manufactured in the last decade includes a Secure Enclave or Keystore—dedicated hardware designed specifically to generate and protect cryptographic keys. Suddenly, mobile devices aren't just adequate for crypto security—they're actually superior to most desktop setups.

Final Thoughts

EIP-7951 represents far more than a technical upgrade to Ethereum's cryptographic capabilities. It's the culmination of several years of parallel development in Account Abstraction, mobile hardware security, and user experience design. The combination creates possibilities that simply didn't exist before.

The convergence of Account Abstraction's programmable wallets, EIP-7951's hardware signing with P-256, and Social Recovery's distributed trust model produces something genuinely new: self-custodial wallets that match the usability of custodial services without any of their drawbacks. Users maintain complete control without requiring technical expertise. Security improves through hardware isolation while UX improves through biometric authentication. Recovery becomes possible without seed phrases through social mechanisms that distribute trust.

For Web3 as an ecosystem, Fusaka could mark the transition from niche to mainstream—from technology that early adopters tolerate despite poor UX to technology that regular people choose because it works better. This is the moment where blockchain stops being something you need to learn and becomes something you just use.

From Chat to Swarms

Not so long ago, the future of AI felt like chatting with a clever machine. We typed prompts, the model replied, and for a moment it seemed like magic. But if you've ever tried to use an LLM for real work—shipping code, running research, or managing projects—you know the truth. Conversation alone doesn't get the job done. It's like asking a colleague for advice when what you really need is a team that rolls up its sleeves.

That's where the story begins to shift. Instead of managing every step through prompts, we are learning how to delegate—not to a single model, but to a swarm of autonomous agents working together. What once sounded like science fiction is quickly becoming a practical reality. And just as graphical interfaces once replaced command lines, agent swarms are poised to transform how we work with digital tools, run experiments, and even build entire systems.

A Short History, Told Quickly

It began with chatbots—scripted if/then trees that pretended to converse. Then came LLM chat: far more natural, but still reactive. You asked; it replied. Useful, but cognitively heavy—you had to break work down into steps, write long prompts, and micro-manage outputs.

The next step is different: agents that plan, choose tools, and act on your behalf. A recent GAO report frames it clearly: agents can operate autonomously to accomplish complex tasks and make time-critical decisions, potentially reshaping entire workflows.

The Agent Leap (and Why It's Happening Now)

Think of an agent as a digital teammate. It designs its own workflow, uses APIs, tools, and services as its "hands and eyes," and adapts when things change. Modern stacks give agents the ability to break a request into steps, call other services, retry when something fails, and report back with results.

IBM describes them well: agents automate multi-step goals by deciding, problem-solving, and executing—not just predicting text. And when you group them, you get agents coordinating as a team. One plans, another executes, a third reviews, a fourth checks compliance. It's not one giant model trying to do everything—it's a division of labor, closer to how real human teams work.

How Swarms Actually Work (Without the Hype)

The structure is surprisingly familiar:

Role specialization. Agents are assigned narrow jobs (fetch data, write copy, test code, check compliance).

Orchestration. Either a "lead" agent delegates tasks (hierarchy), or peers coordinate directly (decentralized).

Shared memory. Agents write to and read from a common workspace so context isn't lost.

Fault-tolerance. Timeouts, retries, and redundancy keep progress moving even if one fails.

This pattern is showing up in data engineering, product development, and operations. Powerdrill.ai calls swarms "intelligent middleware"—a layer between human intent and fragmented digital infrastructure, turning APIs, databases, and microservices into coherent outcomes.
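The four ingredients above can be sketched in a few dozen lines. In this toy version, the agent functions are stubs standing in for real model or API calls; the orchestrator, shared memory, and retry loop are the pattern itself:

```python
shared_memory = {}  # the common workspace agents read from and write to

def fetch_data(mem):
    mem["data"] = [3, 1, 2]
    return True

def write_summary(mem):
    mem["summary"] = sorted(mem["data"])
    return True

attempts = {"review": 0}

def flaky_review(mem):
    attempts["review"] += 1
    if attempts["review"] == 1:
        return False  # simulate a transient failure on the first try
    mem["approved"] = True
    return True

def orchestrate(steps, mem, retries=3):
    """Lead agent: run each specialized role in order, retrying failures."""
    for name, step in steps:
        for _ in range(retries):
            if step(mem):
                break
        else:
            raise RuntimeError(f"agent '{name}' failed after {retries} tries")
    return mem

result = orchestrate(
    [("fetcher", fetch_data), ("writer", write_summary), ("reviewer", flaky_review)],
    shared_memory,
)
```

Note that the reviewer fails once and the run still completes: fault tolerance here is nothing more exotic than a retry loop around each role.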

From Prompting to Delegating

Here's the real transformation: prompting was a form of micro-management. You told the model exactly what to do, step by step. Delegation flips it around: you set the goal, the swarm handles the steps, and you step in only to review and steer.

That changes the cognitive load. Instead of thinking in instructions, you think in outcomes. The interface shifts too: from chat boxes to delegation surfaces—spaces where you define goals, constraints, and success criteria, while seeing the plan and progress unfold in real time.
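One way a "delegation surface" could be modeled in code: the user states an outcome, constraints, and success criteria, while the plan belongs to the swarm and is merely visible for review. All field names here are illustrative, not from any particular framework:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Delegation:
    goal: str
    constraints: List[str] = field(default_factory=list)
    success_criteria: List[str] = field(default_factory=list)
    plan: List[str] = field(default_factory=list)  # filled in by agents

    def is_done(self, checks: List[bool]) -> bool:
        # The human reviews outcomes against the criteria, not the steps.
        return len(checks) == len(self.success_criteria) and all(checks)

task = Delegation(
    goal="Summarize last quarter's support tickets",
    constraints=["read-only data access", "finish within one hour"],
    success_criteria=["top five themes identified", "report under two pages"],
)
task.plan = ["fetch tickets", "cluster themes", "draft report", "self-review"]
```

The structural shift is visible in the types: the human writes `goal` and `success_criteria` once, while `plan` is machine-owned output the human only inspects.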

Human-AI interaction research has long argued for this: keep humans first, design for transparency, and build systems that augment rather than replace. It's the same principle.

The Future of Work (and Why It's More Human)

The irony is clear: the more agents automate, the more human skills matter. When machines handle execution, humans focus on framing problems, setting priorities, making trade-offs, and telling the story of outcomes. Delegation doesn't erase our role—it sharpens it.

Researchers and policymakers agree: the future is less about replacement and more about rebalancing. Some tasks vanish, but new ones—like orchestrating swarms, defining guardrails, and aligning outcomes with values—become central. The key skill isn't writing clever prompts anymore. It's learning how to delegate well.

Beyond Chat Interfaces

Imagine this: you open your favorite mobile app. Maybe it's where you keep your loyalty points, maybe it's where you play quick games, or maybe it's where you manage your digital assets. For years, you knew exactly what to expect—buttons, menus, forms. You tapped, swiped, confirmed. But now something feels different. Instead of clicking through layers of menus, the app simply asks: "What do you want to do today?" You say it out loud, or type it in a single line—and suddenly the app is no longer a set of screens. It's a partner, an agent, ready to plan, act, and deliver results on your behalf.

That shift—from structured interfaces to intent-driven ones—is changing how users behave, and with that, what they expect from design. Two recent explorations, one from Smashing Magazine on design patterns for AI interfaces, and another research paper on the "interface dilemma" in multimodal systems, both point to the same conclusion: AI is rewriting the rules of UI design.

From Tapping to Asking

For decades, apps taught us to navigate through fixed flows: tap this, scroll that, fill here. But conversational and agent-driven interfaces flip the script. Instead of navigating a structure, users declare intent—sometimes in words, sometimes in voice, sometimes by selecting from smart suggestions.

This doesn't mean prompts replace design. Quite the opposite: the new design challenge is to guide intent. Chips with examples, query builders, sliders, or even visual canvases are becoming just as important as traditional buttons. Interfaces are learning to help users say what they want instead of forcing them to learn where the app hid it.

Outputs That Act Like Answers

A text dump isn't enough anymore. If you ask an app to recommend a game, it shouldn't give you paragraphs—it should give you a ranked list, previews, maybe even a quick-play option. If you ask about loyalty points, you should see them as a timeline of transactions, not just a number. Good AI-driven design shapes the output to match the task: maps, comparisons, timelines, side-by-side versions. The UI becomes an interpreter, turning model output into something people can immediately act on.

Refinement Becomes the Flow

Here's the reality: the first answer is rarely the final one. That's why modern patterns put refinement at the center. Highlight a part of the result, tweak it, re-run only that step. Switch tones, shorten or expand, bookmark versions. Instead of starting over with a giant new prompt, you sculpt the result. For products like superapps, where users jump between tasks—payments, games, social feeds—that ability to refine quickly is what makes the AI feel natural instead of frustrating.

When Chat Steps Back

Luke Wroblewski recently described how chat itself is slowly receding from the center of AI interfaces. At first, apps gave us a chat box and a scrolling thread—type, wait, type again—until you finally got what you needed. Then came split-screen layouts, one side for conversation, the other for reviewing outputs. Better, but still clunky.

Now agents are changing the picture again. Instead of negotiating every step with an AI, you give an instruction, and the system quietly does the back-and-forth on its own—choosing tools, calling other agents, adjusting its plan—until it hands you a finished result. The chat window is still there if you want to peek inside, but by default it stays hidden.

That's an important shift. It means users won't always expect to "talk" to their software. They'll expect to start something, step aside, and come back when it's done. And that's a very different design challenge.

Agents in the Flow

We've all seen "AI tabs" added to apps, as if intelligence was just another feature. But the real change comes when AI meets users where they already are. An agent that books you a ticket, converts your loyalty points, or sets up a multiplayer match shouldn't live in a separate chatbot—it should live inside the flow you already know. As Smashing Magazine put it: be AI-second, job-first. The action is primary, the intelligence simply makes it smoother.

Multimodal Means Many Interfaces

The arXiv paper calls this the "interface dilemma". When models handle text, images, and voice at once, the "right" interface depends on context. A quick action might need a button, a complex task might need a structured plan, and voice might be the right fit when hands and eyes are busy and a spoken reply is fast enough.

That's where designers face the hardest choices. You're no longer designing one interface—you're designing a system of surfaces that adapt to task, context, and device. A loyalty app might need voice on the go, rich tables at home, and lightweight previews on a smartwatch. The design isn't static; it's situational.
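The "system of surfaces" idea can be reduced to a routing decision: the same intent gets a different interface depending on task, context, and device. The rules below are invented for illustration, not taken from the cited paper:

```python
def choose_surface(task_complexity: str, device: str, hands_free: bool) -> str:
    if hands_free:
        return "voice"                 # hands and eyes are busy
    if device == "watch":
        return "lightweight preview"   # tiny screen, glanceable output
    if task_complexity == "simple":
        return "button"                # one tap beats a conversation
    return "structured plan"           # complex work needs reviewable steps

# The loyalty-app scenario from above, across three situations:
on_the_go = choose_surface("simple", "phone", hands_free=True)
at_home   = choose_surface("complex", "desktop", hands_free=False)
on_watch  = choose_surface("simple", "watch", hands_free=False)
```

Even this crude version shows why the design is situational: no single branch of the function is the "real" interface; the routing itself is the design artifact.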