Stop Calling Them Dashboards
Anthropic just shipped live artifacts. Most people will build the wrong thing first. Build this instead.
On April 20, Anthropic announced that Claude Cowork can now build live artifacts. These are dashboards and trackers that connect to your apps and files. When you reopen one, it pulls current data instead of showing you whatever was true the last time you looked (pretty handy, right?). Each artifact saves to a dedicated live artifacts tab with version history, and you can pick it back up from any session or device.
Anthropic already positions Cowork as the place where Claude goes beyond answering questions and starts doing work across your local files and cloud apps, with you approving each step. Live artifacts extend that. Claude is no longer limited to helping you produce something once inside a single session. It can hold onto a working surface that you come back to.
A handoff, in this context, just means the moment where Claude finishes its part and gives the result back to you to review, approve, or act on. Live artifacts give that handoff a permanent home instead of burying it in a chat thread you have to dig through.
A lot of teams will look at this and think “oh cool, dashboards.” That framing misses the point. Claude building something prettier is not where the value sits. What actually matters is that Cowork now has a persistent place to hand work back to you when the same job needs doing again.
If you have never used Cowork, here is the short version. Cowork is a mode inside the Claude desktop app, available on macOS and Windows for all paid plans since April 9. Unlike regular Claude chat, Cowork can read your local files, connect to cloud services like Google Drive and Slack through connectors (small integrations that let Cowork talk to other apps), and run multi-step tasks on your behalf. You approve what it does along the way. It runs code in an isolated virtual machine on your computer. Think of it as the non-developer version of Claude Code, aimed at knowledge workers instead of engineers.
Live artifacts add a new layer on top of that. Before this update, Claude could help you build a deliverable inside a session. When the session ended, the output was frozen. If you needed the same thing next week with updated numbers, you had to start over, restate the context, reopen the same sources, and reassemble the whole view from scratch.
That reassembly tax is where most of the real time goes. Getting the first draft done is almost never what slows you down. Rebuilding the same packet because the underlying files moved since last time is what eats your morning. Live artifacts give you a way to skip that rebuild.
How to think about this
Drop the word “dashboard” from your mental model.
Replace it with “standing packet.”
A dashboard makes people imagine charts, monitoring screens, and generic business intelligence. Too broad. A standing packet is tighter. It means a recurring review surface with a clear owner, a job it needs to do, a known set of sources, and a moment where someone looks at it and decides what happens next.
This is much closer to how operators, founders, analysts, and consultants actually work day to day.
Your weekly operating review is a standing packet. So is a client prep brief, or a competitor research board, or a metrics summary with an explanation of what changed attached.
Forget “what cool artifact can Claude build.” Instead ask yourself which recurring packet you are already tired of rebuilding every week. Whatever answer comes to mind first is your best starting point.
Workflows that fit right now
Weekly operating review
This is the cleanest first build for most teams that have someone in an operator or chief of staff type role.
The pain is obvious. Updates live in scattered places. Notes, docs, task trackers, spreadsheets, half-finished status updates in Slack that never got cleaned up. Someone has to compress everything into a view that leadership can scan quickly. That someone does this every single week.
A live artifact can hold that weekly surface in one place. Top-line status, what moved, what slipped, blockers, decisions that need to be made, source links, and what changed since the last version.
For anyone new to this kind of work, a weekly operating review is just a short summary that tells your boss, or your team, where things stand this week compared to last week. It covers who is on track, where things slipped, and whether anything needs a decision before next week. Most companies do some version of this even if they do not call it that.
This is a much better first use case than trying to build a giant company-wide dashboard. The job is narrow, it runs on a real weekly cadence, and you usually already know who owns it.
Meeting prep surfaces
This one is strong for consultants, account leads, founders running investor meetings, and internal operators who brief leadership.
If you prep for the same kind of meeting repeatedly, and that prep always pulls from the same notes, linked files, recent docs, and account context, a live artifact can hold the current prep state without forcing you to start from zero before each meeting.
Aesthetics are beside the point here. The prep surface keeps its shape while the inputs underneath it keep changing. You open it, the latest data is there, and you spend your time reviewing instead of assembling.
If you have never done structured meeting prep, this means having a one-page brief ready before a meeting that includes who you are meeting with, what was discussed last time, any open items, and what you want to get out of this meeting. A lot of people wing it. Structured prep makes meetings shorter and more productive.
Research watchboards
This is where analysts and builder-heavy operators can get more value than most people expect.
Research almost never finishes in one sitting. Finding a single answer is rarely the hard part. What falls apart is keeping things organized over time: the source set, your current interpretation of what you have found, and the questions you still have not answered. Most people scatter this across chat logs and random notes, and when they come back to it a week later they waste 20 minutes figuring out where they left off.
A live artifact can hold active sources, current findings, contradictory evidence, unresolved questions, recent changes, and links back to the original sources for inspection. That gives your research process a durable home instead of disappearing into your chat history.
Metrics with an explanation layer
A lot of teams do not need another chart. They need a chart with a current explanation of what the chart means attached to it. Those are very different jobs.
A chart by itself is passive. You look at it and you still have to figure out what happened. A live artifact becomes useful when it combines the numbers with current anomalies, what changed since the last review, likely causes worth investigating, questions that are still open, and flags for where human judgment is needed before anyone acts on the data.
For beginners, an “anomaly” just means a number that looks different from what you would expect. If your website traffic is usually 1000 visits a day and today it was 5000, that is an anomaly. The explanation layer is the part where you, or Claude, write down why that might have happened.
This is what keeps the artifact from turning into a screen that nobody looks at.
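The anomaly idea from the traffic example can be sketched as a tiny check. The function name, the 3x multiplier, and the numbers are all illustrative assumptions, not anything Cowork provides; a real explanation layer would still need a human (or Claude) to write down the likely cause.

```python
# Illustrative anomaly flag: compare today's value to a recent baseline.
# The 3x multiplier is an arbitrary example threshold, not a recommendation.
def looks_anomalous(today: float, baseline: float, multiplier: float = 3.0) -> bool:
    """Flag values well above or well below the usual baseline."""
    return today >= baseline * multiplier or today <= baseline / multiplier

print(looks_anomalous(5000, 1000))  # True: 5x the usual traffic
print(looks_anomalous(1100, 1000))  # False: within the normal range
```

The flag is the cheap part. The explanation layer is the sentence next to it saying why 5000 happened, and that is the part that keeps the artifact from becoming a passive chart.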
Where people are going to get this wrong
Vanity dashboards
The first mistake will be building artifacts that exist because they feel advanced, not because they support a real recurring decision. If nobody owns it, there is no review schedule, and no one changes their behavior because of what it shows, you have built a prettier dead screen. Skip it.
Making everything live
Some outputs should stay frozen on purpose. A final memo should not silently update itself. The same goes for an approved report that already went to a client, or a recommendation that required careful human judgment to produce. Those should not quietly change because a source file got edited.
When an output is supposed to become a record, freezing it is the right move. But when it supports an ongoing rhythm where someone checks it regularly and makes decisions based on what they see, live starts to make sense.
Trusting refresh more than the source set deserves
This is where people actually get burned.
Anthropic’s own status history shows why restraint matters here. Around the broader Cowork rollout in mid-April, there were incidents. On April 16, Cowork was not starting for some users, and a fix required a desktop app update. On April 17, there were errors uploading documents to Google Drive across Claude.ai, the desktop app, and Cowork. Both were resolved, but both happened.
That does not mean the feature is unreliable. It means “refreshable” is not the same thing as “safe to trust without looking.” If your source inputs are noisy, stale, incomplete, or dependent on a connector that has reliability issues, the artifact can become a cleaner-looking failure surface. That is worse than a messy manual workflow because your confidence goes up while the ground truth underneath gets shakier.
For beginners, “source drift” means the files or data your artifact pulls from have quietly changed, gone stale, or stopped updating without anyone noticing. The artifact still looks current, but the information feeding it is not.
The filter to use before building anything
A workflow is a strong live artifact candidate when most of these are true:
The work repeats on a known schedule: weekly, biweekly, before every board meeting, and so on.
You can name the bounded set of files or sources it pulls from.
The output is easier to review as a visual surface than as raw chat text.
One person clearly owns the review.
A specific decision, handoff, or operating rhythm depends on it.
The rebuild cost already annoys you enough that you have complained about it.
If most of those are false, keep the output static. You will save yourself time and avoid building something you have to babysit.
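The filter above is really a scoring rule: count how many criteria hold and go live only when most do. Here is a minimal sketch of that rule. The function name, criteria keys, and the threshold of four are hypothetical choices for illustration; this is not a Cowork API.

```python
# Hypothetical helper: score a candidate workflow against the six
# criteria from the checklist and decide live vs. static.
CRITERIA = [
    "repeats_on_known_schedule",
    "bounded_source_set",
    "better_as_visual_surface",
    "single_clear_owner",
    "decision_depends_on_it",
    "rebuild_cost_already_hurts",
]

def should_go_live(workflow: dict[str, bool], threshold: int = 4) -> bool:
    """Return True when "most" criteria hold (here: at least 4 of 6)."""
    score = sum(workflow.get(criterion, False) for criterion in CRITERIA)
    return score >= threshold

# A weekly operating review typically checks every box.
weekly_review = {criterion: True for criterion in CRITERIA}
print(should_go_live(weekly_review))  # True: strong live-artifact candidate

# A one-off memo with a single owner and nothing else should stay static.
one_off_memo = {"single_clear_owner": True}
print(should_go_live(one_off_memo))  # False: keep it frozen
```

The exact threshold matters less than forcing yourself to answer the six questions before building anything.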
What I would build first
One weekly operating review artifact for one team.
Skip the company control center and the cross-functional mega-dashboard. One recurring review surface with a real owner.
Here is how I would structure it:
Owner: Founder, operator, or chief of staff
Cadence: Weekly
Inputs: Project notes, task exports, a metrics spreadsheet, a blockers log, and any linked decision docs
Surface: Top-line status, wins, slips, blockers, decisions needed, source links, what changed since the last version
Review point: The owner checks priorities, removes bad inferences, and decides what gets shared upward
Kill condition: If no one uses it, trust erodes, or source drift creates more cleanup than the old manual process ever did
That is boring on purpose. Boring recurring work is where these systems start saving real time.
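The structure above can be written down as a small spec before you build anything. Every field name here is invented for illustration; Cowork has no such schema, but forcing the packet into this shape is a quick way to check you can actually name the owner, cadence, and inputs.

```python
from dataclasses import dataclass

# Hypothetical spec for one standing packet. Field names are
# invented for illustration; this is not a Cowork format.
@dataclass
class StandingPacket:
    owner: str
    cadence: str
    inputs: list[str]
    surface: list[str]
    review_point: str
    kill_condition: str

weekly_review = StandingPacket(
    owner="chief of staff",
    cadence="weekly",
    inputs=["project notes", "task exports", "metrics spreadsheet",
            "blockers log", "linked decision docs"],
    surface=["top-line status", "wins", "slips", "blockers",
             "decisions needed", "source links", "changes since last version"],
    review_point="owner checks priorities, removes bad inferences, "
                 "decides what gets shared upward",
    kill_condition="no one uses it, trust erodes, or source drift creates "
                   "more cleanup than the old manual process",
)
```

If you cannot fill in one of these fields with a concrete answer, that is the checklist from the previous section telling you to keep the output static.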
What this means for Cowork going forward
Live artifacts do not make Cowork magical. But they go a long way toward making it legible as a real work tool instead of a fancy chat window.
The product already had the pieces for multi-step work across files and apps. Live artifacts give Claude a stronger handoff surface for work that repeats, needs regular review, and is easier to manage as a standing object than as a one-off answer you have to regenerate. That is a narrower claim than the hype version, but it maps to how people actually work.
If you want to test this properly, skip the flashy build. Pick one recurring packet where you know who owns it, the sources are bounded, and there is an obvious review loop. Then decide whether it should stay a static deliverable or become a live artifact that earns its place by cutting the rebuild work you are already doing.
That decision is the whole game.

