Authorization Model

Worka makes authorization decisions at several layers:

  • can this person open or change this workspace
  • can this AI team member invoke this tool
  • can this pack call another pack
  • can this shared view be opened by this audience
  • can this service be public

You should treat those as one model with several entry points, not as unrelated features.

The principle to keep in mind

Every action in Worka has a source and a target.

Examples:

  • a person wants to edit a workspace
  • an AI team member wants to invoke a pack tool
  • a published view wants to expose data to the public
  • a pack wants to call another pack or a host capability

The system should always be able to answer the same question:

Is this source allowed to act on this target in this way?

That is the question your authorization model must answer predictably.
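The source/target/action question can be reduced to a single decision function over one table of grants. This is a minimal sketch, not Worka's implementation; the `Source`, `Target`, and `GRANTS` names are assumptions made for illustration.

```python
from dataclasses import dataclass

# Hypothetical types for illustration; the real model will differ.
@dataclass(frozen=True)
class Source:
    kind: str   # e.g. "user", "ai_member", "pack", "public"
    id: str

@dataclass(frozen=True)
class Target:
    kind: str   # e.g. "workspace", "tool", "view", "service"
    id: str

# One table of grants, keyed by (source, target, action).
GRANTS: set[tuple[Source, Target, str]] = set()

def is_allowed(source: Source, target: Target, action: str) -> bool:
    """The one question: may this source act on this target in this way?"""
    return (source, target, action) in GRANTS

# Usage: a person granted "edit" on a workspace.
alice = Source("user", "alice")
ws = Target("workspace", "ws-1")
GRANTS.add((alice, ws, "edit"))

print(is_allowed(alice, ws, "edit"))     # True
print(is_allowed(alice, ws, "publish"))  # False
```

Because every entry point asks the same function the same question, each of the layers listed above becomes a different way of populating and consulting one table, not a separate mechanism.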

Workspace authorization

For normal product use, the most important checks are workspace checks.

They determine whether someone may:

  • open the workspace
  • change its views
  • connect a service
  • add or change AI team members
  • review or approve work
  • share or publish views

Do not treat sharing as separate from authorization. Shared views are one of the most important access paths in the platform.
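One way to keep sharing inside the model is to make it just another action in the same role table as every other workspace check. A minimal sketch, assuming a hypothetical role-to-action mapping; the role and action names are illustrative, not Worka's:

```python
# Hypothetical workspace roles mapped to the checks above.
ROLE_ACTIONS = {
    "viewer":  {"open"},
    "editor":  {"open", "edit_views", "connect_service"},
    "manager": {"open", "edit_views", "connect_service",
                "manage_ai_members", "review_work",
                "share_views", "publish_views"},
}

def workspace_allows(role: str, action: str) -> bool:
    # Sharing and publishing live in the same table as "open" and
    # "edit_views" -- there is no side channel for shared views.
    return action in ROLE_ACTIONS.get(role, set())
```

With this shape, "who can share?" is answered by the same lookup as "who can open?", which is what keeps shared views from drifting outside the authorization model.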

Tool and pack authorization

Tool calls should not be treated as a hidden implementation detail.

When Worka invokes a tool, the platform needs to know:

  • who or what initiated the call
  • which workspace the call belongs to
  • which pack and tool are being targeted
  • whether that caller is allowed to use them

That is true whether the caller is a human, an AI team member, or another pack.
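The four pieces of context above can be carried on every tool call as one record, checked against per-caller grants. This is a sketch under assumed names (`ToolCall`, `TOOL_GRANTS`), not the platform's real API:

```python
from dataclasses import dataclass

# Illustrative call context; field names are assumptions.
@dataclass(frozen=True)
class ToolCall:
    caller_kind: str    # "user", "ai_member", or "pack"
    caller_id: str      # who or what initiated the call
    workspace_id: str   # which workspace the call belongs to
    pack: str           # which pack is being targeted
    tool: str           # which tool within that pack

# (caller_kind, caller_id, workspace_id) -> set of "pack.tool" grants
TOOL_GRANTS: dict[tuple[str, str, str], set[str]] = {}

def may_invoke(call: ToolCall) -> bool:
    key = (call.caller_kind, call.caller_id, call.workspace_id)
    return f"{call.pack}.{call.tool}" in TOOL_GRANTS.get(key, set())

# Usage: an AI team member granted one tool in one workspace.
TOOL_GRANTS[("ai_member", "helper", "ws-1")] = {"crm.lookup"}
```

Note that a pack calling another pack goes through the same check as a human or an AI team member: it is a caller like any other, which is what prevents pack-to-pack calls from becoming a hidden access path.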

Public and shared access

Public access should be explicit and inspectable.

If a view or service is public, you should be able to answer:

  • who made it public
  • when that happened
  • what audience it was intended for
  • whether it can perform write actions or only read data

Do not let “easy sharing” turn into “unknown exposure.”
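One way to make public access inspectable is to require that every public target carry a record answering those four questions. A sketch with an assumed `PublicExposure` shape; the field names are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A hypothetical audit record: a target is public only if one of these exists.
@dataclass(frozen=True)
class PublicExposure:
    target_id: str            # the view or service that was made public
    made_public_by: str       # who made it public
    made_public_at: datetime  # when that happened
    intended_audience: str    # what audience it was intended for
    read_only: bool           # read-only, or able to perform write actions

exposure = PublicExposure(
    target_id="view-42",
    made_public_by="alice",
    made_public_at=datetime.now(timezone.utc),
    intended_audience="customers",
    read_only=True,  # public write access should be the rare, loud exception
)
```

If no such record exists for a target, the target is not public; "unknown exposure" becomes structurally impossible rather than merely discouraged.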

Capability expansion

Authorization also applies when the platform is about to gain new power.

Examples include:

  • attaching a new pack
  • adding a new external connection
  • enabling persistent browser access
  • making a service or view public
  • allowing a device to contribute to the network

These are not normal content edits. They expand what the workspace can do, so they should pass through higher-trust paths and, where needed, explicit approval.
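A simple way to enforce the higher-trust path is to maintain an explicit list of capability-expanding actions and route anything on it through approval. The action names below are assumptions for the sketch:

```python
# Illustrative set of actions that expand what the workspace can do.
CAPABILITY_EXPANDING = {
    "attach_pack",
    "add_external_connection",
    "enable_persistent_browser",
    "make_public",
    "join_device_to_network",
}

def requires_explicit_approval(action: str) -> bool:
    # Normal content edits fall through; capability expansions do not.
    return action in CAPABILITY_EXPANDING
```

Keeping this as a single explicit set also gives operators one place to audit: if a new feature expands capability, it must appear here, or it silently takes the low-trust path.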

What operators should verify

A sound authorization model lets you verify all of the following:

  • a user can only manage workspaces they belong to
  • an AI team member only sees the tools and packs assigned to it
  • a pack cannot silently reach another pack it was not meant to use
  • public views and services are only public because someone chose that explicitly
  • access can be revoked cleanly without leaving orphaned capability behind

If any of those checks relies on custom logic inside a single feature rather than on the platform model, fix the platform model.
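The last property on the list, clean revocation, is the one most often broken by feature-local logic. A sketch of what "no orphaned capability" means when grants live in one table; the grant tuples are illustrative:

```python
# All access for a subject lives in one table of (subject, target, action).
grants = {
    ("user:alice", "workspace:ws-1", "open"),
    ("user:alice", "workspace:ws-1", "share_views"),
    ("user:bob",   "workspace:ws-1", "open"),
}

def revoke_all(subject: str) -> None:
    # Revocation sweeps every grant for the subject in one place,
    # not one feature's private copy of the access rules.
    for g in [g for g in grants if g[0] == subject]:
        grants.discard(g)

revoke_all("user:alice")
```

If sharing, tool access, or public exposure kept their own separate records, this sweep would miss them; that is exactly the orphaned capability the checklist is designed to catch.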