From Context Engineering to Knowledge Engineering
The industry's focus on context engineering is the early signal of something larger — knowledge that compounds across agents, teams, and time into organizational intelligence. We built the protocol to get there.
The moment you deploy your first agent to do real work, you discover a problem nobody warned you about.
It is not a problem of intelligence. Modern language models can reason, plan, and execute with remarkable capability. The problem is context. More precisely: how does an agent know what it needs to know, right now, for this specific task? And how does the knowledge it generates get preserved for the next agent — or the next session?
This is the problem of knowledge engineering. And until now, we have not had the right tools for it.
GITKB
GitKB is a git-like knowledge protocol for AI-driven engineering teams. It gives agents structured, persistent memory — typed documents, a traversable graph, and distributed sync semantics — so knowledge survives across sessions, agents, and time.
The Ladder of Context Compromises and Workarounds
Walk through the options available to a team deploying agents today. Every engineer working with AI has tried at least two of them.
One-shotting — loading everything into a single prompt — works until it doesn’t. Context windows are finite. Documents are not. But beyond the raw size limit, there is a subtler problem: an agent drowning in irrelevant context performs worse than one given exactly what it needs. Attention is not free. When you pack an agent’s context with everything that might be relevant, you have already lost.
Context files inside your code repository — the .cursorrules approach, the CLAUDE.md approach — are an improvement. Now at least your context is version-controlled alongside your code. But this approach conflates two fundamentally different things: source code and knowledge. They evolve at different rates. They are consumed by different actors in different ways. When your context files are tangled into your codebase, every agent must clone the entire repository to access a document that might be three paragraphs long. And the moment you want to share knowledge across codebases — across teams, across organizations — you are stuck. The repository boundary becomes the knowledge boundary, and that boundary is wrong.
A separate git repository for context is one step better. Knowledge and code are now decoupled. But you have inherited all of git’s assumptions, and git’s assumptions were carefully designed for a different problem: source code developed by humans in branch-per-feature workflows, where “broken” is a binary state and merge conflicts have mechanical resolution strategies. These assumptions do not survive contact with knowledge. You don’t branch a belief. You don’t merge two competing understandings of system architecture. When you apply git’s collaboration model to prose, you get a repository that technically works and practically doesn’t.
Syncing to Google Drive, Notion, or Confluence — perhaps the most common choice for human-authored knowledge, and the most obvious failure mode for agents. These tools are optimized for human browsing. An agent doesn’t browse; it retrieves. None of these systems offer the checkout semantics that let an agent say: I need exactly these three documents, and I need them to be consistent with each other at the moment I received them. There is no commit primitive. There is no provenance. There is no graph.
Every one of these approaches is a workaround. None of them is a protocol.
FILE-FIRST
Database-backed systems build digest layers, truncated projections, and stripped fields to manufacture agent-consumable context from their own data model. File-first systems don't need that layer. The document is already the right unit.
Knowledge Belongs Beside Code, Not Inside It
One of the sharpest lines in GitKB’s design is the separation between the knowledge base and the code repository. Tasks, incidents, specs, architectural decisions, and progress logs are all stored in the knowledge base — not in the codebase.
This is not just an organizational preference. It is a correctness property. When you store TODO comments in source files, or task lists in NOTES.md at the repository root, or architecture documents in a docs/ folder, you have entangled two things with different lifecycles, different consumers, and different maintenance rhythms. The code is owned by the build system; the knowledge is owned by the people and agents working with the system. Mixing them creates coupling that serves neither well.
git-kb enforces this separation at the protocol level. The knowledge base lives alongside the code repository — accessible from any directory in the project — but never inside it. A task document never appears in git status. It never gets accidentally committed to a feature branch, never shows up in a code review, never breaks a build. The knowledge base is invisible to git.
This separation is also what enables the distributed scale model. Because the knowledge base is its own distributed system — not a subdirectory of a code repository — it can be shared across multiple code repositories, across teams, across organizations. The same incident document can be referenced by agents working in three different codebases. The same architectural decision can inform tasks in a backend repository and a frontend repository simultaneously. The knowledge graph grows beyond any single codebase because it was never imprisoned inside one.
LOCAL-FIRST
git-kb works like git — a local client tool, completely free, no cloud account required. Full functionality offline: knowledge management, code indexing, agent workflows. Cloud sync and organization-wide sharing are available when you need them — always additive, never a prerequisite.
How you deploy is up to you. A knowledge graph can span many repositories, or sit 1:1 with a single monorepo. git-kb provides the primitives for linking your knowledge to your repositories however your organization is structured — and for restructuring those links as your organization evolves.
Agents Know How to Use It Out of the Box
A knowledge protocol is only as useful as the agents that participate in it. We have invested heavily in making GitKB natively understood by agents — without requiring humans to become experts first.
It is worth stating plainly what this means: git-kb is not built for any specific model, any specific agent harness, or any specific AI provider. It works with Claude, GPT, Gemini, local models, or whatever comes next. It works with Claude Code, Cursor, Copilot, Aider, custom agent frameworks, and bare API calls. The protocol is exposed through MCP — a standard, model-agnostic tool interface — and through a CLI that any shell-capable agent can invoke. git-kb accelerates and compounds the capabilities of whatever LLM you are running, today and in the future. It does not require you to bet on a model. It requires only that your agents can read files and run tools — which is all of them.
git-kb ships with an AGENTS.md file and a set of rules files that are automatically loaded by agents working in a git-kb-enabled repository. These files teach agents the protocol: how to check the state of the knowledge base before starting work, how to create documents before implementing solutions, how to commit progress as work unfolds, how to link commits back to tasks and incidents. The agent’s behavior is shaped by the protocol’s documentation, which lives in the repository itself.
The MCP server exposes the full knowledge base API as structured tool calls — kb_list, kb_checkout, kb_commit, kb_board, kb_semantic, kb_callers, kb_impact, and more. Agents can query the graph, retrieve documents, traverse code relationships, and commit updates without leaving their tool use loop. The CLI provides the same capabilities for humans and for shell-native agents. Skills (reusable agent prompt templates) and rules (persistent behavioral guidance) round out the integration surface, making it possible to build rich workflows where agents operate with full knowledge of your project’s conventions and architecture.
The practical consequence is that a developer who has never used git-kb can open a conversation with their agent, report a bug, and watch the agent: create an incident document, populate it with symptoms and initial investigation notes, propose a task with acceptance criteria, implement the fix, commit the code with a wikilink back to the task, update the task with completion evidence, and mark both the task and incident complete — without the developer ever needing to understand the underlying protocol. The discipline is built in. The agent carries it.
We ship deep Claude Code integration today — rules, skills, and slash commands that make git-kb feel native to the Claude Code workflow. Codex and OpenClaw integrations are in flight and landing shortly.
Language models were trained on files, commands, and streams — the non-arbitrary foundations of computer science. The best agent interfaces don’t abstract away from those primitives. They are those primitives.
MODEL AGNOSTIC
GitKB doesn't require you to bet on a model. It works with Claude, GPT, Gemini, local models, and whatever comes next — through MCP, the standard tool interface every major agent supports. You're not locked to a provider. You're not locked to an editor. If your agents can read files and run tools, they can use GitKB.
Engineering Excellence for Every Engineer
Our first release is built around a specific vision: every engineer who uses git-kb, regardless of their experience level, should operate with the systematic rigor of a world-class software engineer.
World-class engineers don’t just fix bugs — they document root causes, open incidents, track fixes, and close loops. They don’t just build features — they write specs before implementing, record the decisions they made and why, and leave the codebase more legible than they found it. They don’t just work — they create an audit trail that their teammates can follow and build on. These habits compound. Over time, codebases maintained by engineers who practice them become dramatically easier to reason about, modify, and extend.
Most engineers don’t have these habits — not because they lack the discipline, but because the tools make them inconvenient. Opening an incident document takes long enough that it doesn’t happen when a quick fix is also available. Writing an architectural decision record takes long enough that it gets deferred until the knowledge is half-forgotten. git-kb makes these practices fast enough that agents can perform them automatically, on behalf of the user, as a natural part of the conversation.
When you report a regression to your agent in a git-kb-enabled project, the agent will: create an incident document and populate it with the reported symptoms; investigate the codebase using code intelligence tools to identify a root cause; open a task linked to the incident with a proposed implementation; implement the fix with care; commit the code with a message that wikilinks back to the task; update the task with a progress log entry and references to the commits; verify the fix against the acceptance criteria; mark the task complete with completion evidence; and update the incident with a reference to the resolving task before marking it resolved. The full loop. Every time.
FOR ENGINEERING LEADERS
Every agent interaction produces a traceable audit trail. Every decision has a document. Every fix is linked to an incident, every incident to the commit that resolves it. The discipline is structural — agents maintain it automatically, without requiring engineers to change their habits. This is the kind of systematic rigor that is impossible to mandate and easy to build into a protocol.
This is not a workflow you configure. It is a workflow the protocol motivates. Because the documents are easy to create, easy to update, and naturally linked by the graph, the agent’s incentive is always to maintain them rather than shortcut them. The discipline is structural, not aspirational.
Context Engineering Was a Beginning
The term context engineering has emerged to describe the practice of carefully curating what you put in an AI’s context window. It is an important practice. What you include shapes what the model can do. Irrelevant context pollutes; missing context blinds. Getting it right is genuinely difficult, and the community has learned a great deal about how to do it.
But context engineering, as currently practiced, is a session-level concern. You engineer the context for this prompt, in this conversation, with this agent. When the session ends, the work that went into constructing that context is gone — or preserved only if someone manually updated a document somewhere.
Knowledge engineering is the durable form. It is the practice of building and maintaining the structures that make context engineering tractable over time, at scale, across many agents and many humans working in coordination. It asks not just what does this agent need right now? but how do we build the accumulated knowledge that makes every future agent — and every future human — more capable than the last?
Context engineering operates at the scale of a session. Knowledge engineering operates at the scale of an organization’s lifetime.
The Projected Knowledge Graph
At the heart of GitKB is a concept that sets it apart from every alternative: the projected knowledge graph.
A knowledge graph is a representation of facts as nodes and the typed relationships between them as edges. The value of a graph is not in the nodes themselves — it is in the traversal. Given a bug report, you can follow edges to the relevant architectural decision that created the constraint, to the spec that described the intended behavior, to the code symbols that implement it, to the task that was opened to fix a similar issue last quarter. The graph turns isolated documents into a navigable web of understanding.
But here is the crucial design choice: in GitKB, files are the canonical data. The graph is projected from them.
Every document in a knowledge base is a Markdown file. Those files contain both structured data (YAML frontmatter declaring document type, status, relationships, and metadata) and unstructured prose (the human- and agent-readable body). Together, these two layers contain everything required to reconstruct the entire knowledge graph from scratch. There is no database that is the “real” source of truth and files that are an export. The files are the truth. The graph is derived from them, deterministically.
This is not a minor implementation detail — it is a foundational guarantee. It means your knowledge base is as durable and portable as a directory of text files. It means you can audit any edge in the graph by reading the file that declares it. It means the graph can never drift from the documents that define it, because the documents define it completely. And it means the graph can be rebuilt — always, perfectly — from the file store alone.
It also means no vendor lock-in. Your documents are files. You can pull them and take them with you, any time.
WHAT THIS MEANS
Your knowledge base is a directory of Markdown files — always. No proprietary database. No lock-in. The graph is derived from those files deterministically and can be rebuilt from them at any time. You own your knowledge, completely.
When an agent pulls three documents, it does not receive three orphaned files — it receives three nodes with all the relationship information encoded in their frontmatter and body, sufficient to understand their position in the wider graph without needing to materialize the entire graph locally.
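A minimal sketch makes the projection concrete. The frontmatter fields, document names, and edge types below are illustrative assumptions, not GitKB's actual schema; the point is that the graph derives deterministically from the files alone:

```python
# Sketch: project typed graph edges from Markdown documents.
# The frontmatter fields and relation names here are illustrative
# assumptions, not GitKB's actual schema.
import re

def parse_document(text):
    """Split a Markdown document into frontmatter fields and body."""
    m = re.match(r"^---\n(.*?)\n---\n(.*)$", text, re.DOTALL)
    fields = {}
    for line in m.group(1).splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields, m.group(2)

def project_edges(docs):
    """Derive (source, relation, target) edges from frontmatter and
    [[wikilinks]] in the body -- deterministically, from files alone."""
    edges = []
    for doc_id, text in docs.items():
        fields, body = parse_document(text)
        if "resolves" in fields:
            edges.append((doc_id, "resolves", fields["resolves"]))
        for target in re.findall(r"\[\[([^\]]+)\]\]", body):
            edges.append((doc_id, "references", target))
    return edges

docs = {
    "task-042": "---\ntype: task\nresolves: incident-007\n---\nFix covered by [[spec-013]].",
    "incident-007": "---\ntype: incident\n---\nSymptoms observed in [[src/auth/session.rs]].",
}
print(project_edges(docs))
```

Deleting the derived graph loses nothing: rerunning the projection over the same files reproduces the same edges, which is the sense in which the files are the truth.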
Code Intelligence Is Native
One of the most significant things a knowledge graph can model is the relationship between understanding and implementation. In software development, these have historically lived in separate places: understanding in wikis and documents, implementation in source code. The gap between them is where most bugs live.
GitKB treats code as a first-class citizen of the knowledge graph. Documents can reference code symbols directly — functions, types, files, modules — and the graph tracks those references as typed edges. When an architectural decision document says “this pattern is implemented in src/auth/session.rs::validate_token,” that is not a text hyperlink. It is a graph edge that can be traversed, analyzed, and queried. GitKB code intelligence commands and an optional daemon maintain an index of your code, and the graph can tell you: which tasks are linked to this function? Which incidents have affected it? Which architectural decisions govern its behavior?
The token economy benefits are additional, but also substantial. Instead of dumping hundreds of lines of source code into an agent’s context to help it understand a system, you can give it a small, precise document that references the relevant symbols. The agent can resolve those references on demand — fetching only the code it actually needs, when it needs it. The knowledge graph becomes a compression layer for code intelligence, enabling agents to reason about large codebases with a fraction of the context overhead.
Beyond document references, code intelligence tools give agents the power to find all callers and callees and search symbols across the entire codebase in a single call — in tens of milliseconds — saving the many turns and tokens otherwise spent issuing grep and find commands.
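Conceptually, queries like "which tasks are linked to this function?" amount to a reverse index over symbol-typed edges. A hedged sketch, in which the document names, relation names, and symbol paths are invented for illustration:

```python
# Sketch: answer "which documents reference this code symbol?" via a
# reverse index over symbol-typed graph edges. Edge shapes are
# illustrative, not GitKB's wire format.
from collections import defaultdict

edges = [  # (document, relation, code symbol)
    ("task-042",     "implements", "src/auth/session.rs::validate_token"),
    ("incident-007", "affects",    "src/auth/session.rs::validate_token"),
    ("adr-003",      "governs",    "src/auth/session.rs::validate_token"),
    ("task-051",     "implements", "src/api/routes.rs::login"),
]

by_symbol = defaultdict(list)
for doc, relation, symbol in edges:
    by_symbol[symbol].append((doc, relation))

# One indexed lookup replaces a grep over every document in the base.
for doc, relation in by_symbol["src/auth/session.rs::validate_token"]:
    print(f"{doc} --{relation}--> validate_token")
```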
WHAT THIS MEANS
Your agents stop doing grep. They query the call graph — finding all callers of a function, measuring the blast radius of a change, identifying dead code — across 17 languages. Not text matching. AST-level understanding of what calls what, and what breaks if you change it.
The Protocol
A Single Branch
Git’s branch model exists to support parallel, isolated development of source code. You branch to insulate experimental changes from stable ones, then merge when confidence is high. This model makes sense when “broken” is a binary state: the code either compiles and passes tests, or it doesn’t. You need branches because a broken main branch is catastrophic.
Knowledge doesn’t work this way. A hypothesis is not “broken.” A failed experiment is not invalid — it is data. The history of what you tried and why it didn’t work is often more valuable than the current state of what does. Branches in a knowledge system would create exactly the wrong incentive: to isolate and ultimately discard the work that didn’t pan out, when that work should be remembered.
GitKB uses a single branch for each knowledge base. Knowledge grows like a tree. The root is where you started; the leaves are where you are now. The shape of the tree — its history — is the record of how understanding evolved. Old experiments don’t disappear — they recede into history, always retrievable, never in the way.
Sparse Sync and Natural Scale
In git, cloning a repository fetches the complete history of every file. This is the right tradeoff for source code, where understanding any change often requires understanding the full context of the files around it. It is the wrong tradeoff for knowledge, where an agent working on one corner of a system needs nothing about the rest — and where the total accumulated knowledge of an organization will long since have exceeded any reasonable checkout size.
GitKB’s sync protocol is sparse by default. An agent pulls the documents it needs — by path, by type, by relationship to another document — and receives exactly those, along with their relevant history. When it commits and pushes, only its changes propagate. Other agents never need to reconcile changes for documents they never pulled.
This sparsity is not merely a performance optimization. It is what gives the architecture its natural scale. A hundred agents — each working on a small, precisely scoped checkout, each with perfect consistency guarantees on that checkout, each committing to a shared knowledge base — can all operate simultaneously without coordination overhead proportional to the size of the whole. This decentralization enables choreography — coordination that centralized orchestration frameworks cannot provide — while remaining fully open to centralized orchestration when you want it. The knowledge base grows continuously, organically, without any single participant needing to hold all of it or to coordinate its growth. This is the fan-out model that the agentic era requires: not a central system that agents query, but a distributed protocol that agents participate in.
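The selection semantics of a sparse pull can be pictured as a bounded graph traversal from a seed document. This is an illustrative model, not the protocol's actual pull implementation; the document names and edge shapes are assumptions:

```python
# Sketch: a sparse pull selects documents by relationship to a seed,
# instead of cloning the whole knowledge base. Illustrative model only.
docs = {
    "incident-007": {"type": "incident"},
    "task-042":     {"type": "task"},
    "task-051":     {"type": "task"},
    "spec-013":     {"type": "spec"},
}
edges = [("task-042", "resolves", "incident-007"),
         ("task-042", "references", "spec-013")]

def pull(seed, depth=1):
    """Return the seed plus every document within `depth` edges of it."""
    selected = {seed}
    frontier = {seed}
    for _ in range(depth):
        frontier = ({b for a, _, b in edges if a in frontier} |
                    {a for a, _, b in edges if b in frontier})
        selected |= frontier
    return selected

# task-051 is never fetched: it is unrelated to the incident at hand.
print(sorted(pull("incident-007", depth=2)))
```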
Sparse Checkout — The Workspace and the File Store
The checkout model introduces a distinction that is worth making explicit: workspace files are separate from the file store.
The file store is the durable, versioned record of all documents — the canonical layer that backs the projected graph. It is managed entirely by git-kb’s commit process, and by an optional daemon if realtime efficiency is desired. Agents and humans never touch it directly.
The workspace is an ephemeral editing surface: a local directory where checked-out documents are materialized as ordinary files. An agent reads them, edits them, and stages the changes. The workspace is the interface. The file store is the database. They are separated by a minimal, well-defined boundary: the commit.
This separation matters for two reasons. First, it protects the canonical record from partial writes, corrupted edits, or abandoned sessions. An agent can crash mid-edit without leaving the knowledge base in an inconsistent state. Second, it makes the interface honest: what the agent touches is explicitly scoped to what it checked out. There is no implicit access to the rest of the knowledge base, and therefore no accidental contamination.
Agents understand files. They have always understood files — files are the most universal interface between processes and between humans and machines. When the workspace materializes documents as Markdown files, agents can apply every capability they have for reading and writing structured and unstructured content. The complexity of the underlying system — the graph, the version history, the sync protocol — is entirely hidden behind the IO boundary of the commit. git kb status and git kb diff preview exactly what a commit would change in the graph. git kb commit does the work: canonicalizing changed files into the file store and updating the graph projection from the relationships declared in those files. An optional daemon sits alongside this process, handling only realtime code intelligence indexing and embedding generation for projects that want it — the graph itself needs none of it, and all daemon operations are equally executable in real time by humans or agents if they need them.
This is a deliberate application of a principle that runs through every layer of GitKB’s design: complexity should be compartmentalized to the layer that manages it, and hidden from every layer above it. git-kb commands manage the file store and graph projection so the workspace doesn’t have to. The workspace provides a simple interface so agents don’t have to understand the protocol. The MCP and CLI provide a structured interface so agent instructions don’t have to encode protocol details. Each layer sees only what it needs to function. This is how you build systems that agents — and humans — can actually use.
Commits and Full Auditability
A commit in GitKB does what a commit in git does: it records a consistent snapshot of a set of changes, names them, and adds them to an append-only history. But the semantics are enriched for knowledge.
When you commit a set of documents, you can commit them together — atomically. A task update, a linked incident update, and a progress log entry can land in the same commit, with a single message that explains the whole operation. Future queries can retrieve the state of all three documents as of that commit, understanding them as a coherent event rather than a set of independent edits.
The commit history is the audit trail. Every change to every document — who made it, when, why, in what context — is recorded permanently. For organizations running agents at scale, this auditability is not optional. When an agent makes a decision that affects the knowledge base, you need to be able to trace it: what was the document state that led to this decision? What changed after? What commit recorded it? GitKB answers all of these questions without additional instrumentation.
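An append-only log with atomic multi-document commits and as-of reconstruction captures these semantics. A toy model for intuition, not GitKB's storage format:

```python
# Sketch: atomic commits over multiple documents, with as-of retrieval.
# A minimal model of the semantics described above, not GitKB's storage.
history = []  # append-only commit log

def commit(message, changes):
    """Record several document updates as one atomic, named event."""
    history.append({"message": message, "changes": dict(changes)})
    return len(history) - 1  # commit id

def state_as_of(commit_id):
    """Replay the log to reconstruct every document at that commit."""
    state = {}
    for entry in history[: commit_id + 1]:
        state.update(entry["changes"])
    return state

c0 = commit("open incident", {"incident-007": "status: open"})
c1 = commit("fix regression", {
    "task-042": "status: done",
    "incident-007": "status: resolved",
    "progress-009": "verified against acceptance criteria",
})
print(state_as_of(c0))  # the incident, still open
print(state_as_of(c1))  # all three documents, as one coherent event
```

Because the three updates in the second commit land together, a later query sees them as one event — the task, the incident, and the progress entry never disagree about what happened.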
FOR COMPLIANCE-CONSCIOUS TEAMS
Every agent action is committed, attributed, and permanently recorded. When an agent makes a decision, you can trace it: what document state led to it, what changed afterward, what commit recorded it. Full auditability of your AI-driven workflows — with no additional instrumentation required.
Commits also serve as the primary coordination mechanism in the multi-agent case. Because every agent’s changes flow through the commit log, the log becomes the ground truth for what happened, in what order, and why. Agents can reference commits in their documents using wikilink syntax — [[commit:repo@sha]] — creating graph edges that tie implementation decisions to the knowledge that motivated them.
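Extracting those references is plain pattern matching over document bodies. The sketch below assumes the grammar shown above ([[commit:repo@sha]]); the exact rules for repository names and abbreviated hashes are assumptions:

```python
# Sketch: extract commit wikilinks of the form [[commit:repo@sha]]
# from a document body. The grammar beyond the example shown in the
# text is an assumption.
import re

COMMIT_LINK = re.compile(r"\[\[commit:([\w.-]+)@([0-9a-f]{7,40})\]\]")

body = (
    "Fix verified in [[commit:backend@4f2a91c]]; "
    "the earlier attempt was [[commit:backend@9d01b3e]]."
)
# Each match becomes a graph edge tying knowledge to implementation.
edges = COMMIT_LINK.findall(body)
print(edges)
```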
How git-kb Differs from git
The parallels are intentional. GitKB borrows heavily from git’s model because git got a lot right: content-addressed storage, a cryptographically verified history, a distributed architecture with no required central server, and a commit model that makes every change explicit and attributable. These are not accidents of git’s design — they are the right foundations for any system that needs durable, verifiable, distributed state.
But git was built for source code. Where git-kb diverges, it diverges because the problem is different.
Single branch, not many. Git encourages branching as a first-class workflow. git-kb uses a single branch per knowledge base — because knowledge accumulates rather than diverges, and because failed experiments are historical data, not abandoned branches.
Sparse by default, not full clone. git clone gives you everything. git kb pull gives you exactly what you ask for. In a knowledge base that may contain tens or hundreds of thousands of documents authored over years by hundreds of contributors, the full-clone model is not just inefficient — it is the wrong conceptual model. You are not maintaining a local copy of the whole; you are participating, one document at a time. GitKB is built for orders of magnitude more contributors, including ephemeral ones.
Typed documents, not arbitrary files. git tracks any file. git-kb understands the kind of thing each file represents: a task, an incident, a spec, an architectural decision, a progress entry, a context document. Type information enables the system to enforce structure, drive workflows, and answer queries that are impossible against untyped file trees. A query like “show me all open incidents linked to tasks modified in the last week” is trivial in git-kb and impossible in git.
A projected graph, not a flat tree. git’s object model is a tree of blobs. git-kb’s object model is a graph of typed documents with typed edges. Those edges are part of the canonical record — they are declared in document frontmatter and body, projected at query time, and traversable by agents. The graph is not metadata layered on top of a file store; it is derived from the file store, deterministically.
Code intelligence as protocol, not plugin. git has no opinion about the files it tracks. git-kb actively indexes the code files in your repository, understands the call graph, and exposes that understanding as first-class edges in the knowledge graph. Code is not external to the knowledge base — it is part of it.
Sync and query, not sync alone. git’s network API is fundamentally a sync protocol: push changes up, pull changes down. That is the right model for distributing state, and git-kb inherits it. But knowledge has a second access pattern that source code does not: retrieval by meaning. An agent that needs to understand how your team handles authentication failures does not want to pull a directory — it wants to ask a question. git-kb adds a query layer alongside the sync layer, supporting full-text search, structured graph queries (give me all tasks linked to this incident), and semantic search (find documents conceptually related to this topic, even if the words don’t match). The sync API moves knowledge around. The query API makes it useful.
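The structured graph query mentioned above ("show me all open incidents linked to tasks modified in the last week") reduces to a type filter joined through the graph's edges. Field names and dates below are illustrative, not git-kb's query syntax:

```python
# Sketch: the kind of typed graph query git-kb makes trivial --
# "open incidents linked to tasks modified in the last week".
# Field names and edge shapes are illustrative.
from datetime import date, timedelta

docs = {
    "incident-007": {"type": "incident", "status": "open"},
    "incident-002": {"type": "incident", "status": "resolved"},
    "task-042": {"type": "task", "modified": date(2025, 6, 10)},
    "task-019": {"type": "task", "modified": date(2025, 4, 1)},
}
edges = [("task-042", "resolves", "incident-007"),
         ("task-019", "resolves", "incident-002")]

today = date(2025, 6, 12)
recent_tasks = {d for d, f in docs.items()
                if f["type"] == "task"
                and today - f["modified"] <= timedelta(days=7)}
open_incidents = [t for s, _, t in edges
                  if s in recent_tasks and docs[t]["status"] == "open"]
print(open_incidents)
```

Against an untyped file tree, the same question requires grepping prose and guessing at conventions; against typed documents with typed edges, it is a join.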
Where git-kb is most like git is in its philosophy: every change is explicit, every change is attributable, the history is inviolable, and no central authority is required. These properties are as important for knowledge as they are for code. We did not invent them. We applied them to a new domain.
Context Becomes Knowledge. Knowledge Yields Confidence.
There is a compound effect at work in a knowledge base that is maintained over time — and it is the deepest reason we built GitKB.
A single document in isolation is useful context. A document with five linked documents — each clarifying a different aspect of the same domain — is richer context. A document embedded in a graph of hundreds of interconnected records, accumulated over months of development, each node more precisely defined because of the ones around it, each edge more confidently drawn because of the history behind it — that is knowledge.
Context compounded over many generations of iteration, relation, and expansion becomes something qualitatively different from the sum of its parts. The graph acquires definition: terms mean what they mean in your specific domain, not in general. It acquires relatedness: every new document is contextualized by the graph it enters. It acquires expressiveness: patterns and anti-patterns are recorded, not just implied by the code.
For agents, this matters more than it might first appear. The performance of a language model on any given task is determined substantially by the quality and relevance of the context it receives. A well-maintained knowledge graph is a systematically better source of context than a collection of files — because the graph can answer questions like “what is most relevant to this task?” with a precision that no file tree can match. The agent’s ability to retrieve precisely what it needs, traverse from a task to its linked spec to the code that implements it to the incident that revealed the need — this traversal capability is what converts a collection of documents into a knowledge base.
Confidence in inference yields correctness. An agent that understands your system — its history, its decisions, its patterns, its failure modes — makes fewer wrong turns. It proposes implementations that fit the existing architecture. It avoids introducing patterns that have already been tried and abandoned. It knows which modules are stable and which are fragile. Correctness compounds into velocity: the agent that doesn’t backtrack spends more of its time moving forward. Velocity compounds into features. Features delight users and compound into revenue. And software that behaves correctly, and continues to behave correctly as it grows, compounds into user delight — the hardest thing in software to manufacture, and the most valuable.
This is what a knowledge graph does, over time, for the agents that operate against it. And this is what we mean when we say that git-kb is not merely a tool. It is infrastructure for compounding organizational intelligence.
A New Era
We believe software engineering has reached a phase transition.
For decades, the unit of production in software has been the commit: a discrete change to source code, authored by a human, reviewed by humans, deployed by automated systems. Agents now author code commits faster than humans can review them. What emerges from this shift is not just faster development; it is a fundamentally different relationship between the humans who direct software systems and the agents who implement them.
The constraint is shifting. It is no longer how quickly can humans write code? It is how can humans efficiently and effectively leverage agents to create software according to intent? And the answer to that question depends almost entirely on the quality of the knowledge infrastructure they operate against.
GitKB is our answer to that constraint. A distributed knowledge graph protocol, with sparse sync and checkout semantics, designed from first principles for the collaboration pattern we actually need: coordinated and loosely organized groups of humans and orders-of-magnitude-more agents, working on context that compounds over large time horizons into great trees of knowledge.
What git did for source code — making distributed collaboration tractable, giving every change a provenance, letting thousands of contributors work without a central bottleneck — we believe the GitKB knowledge protocol can do for the information that surrounds and shapes that code.
The seeds of knowledge are everywhere. We built the protocol to help them grow.
The teams that fall behind in this era will not be the ones that adopted AI too slowly. They will be the ones that adopted it without infrastructure — agents operating against stale context, losing knowledge across sessions, repeating investigations that were already done, introducing architectural drift because no one recorded what was already tried. These are not edge cases. They are the default when knowledge is treated as an afterthought. The compounding works in both directions: invest in knowledge infrastructure and your agents pull ahead faster with every session; don’t, and they fall further behind.
The future is already here — it’s just not evenly distributed. GitKB is the foundation we’re building so it can be.
GitKB is available as a local CLI today. We’re building the full, open, distributed protocol — cloud sync, organization-wide knowledge graphs, and multi-agent coordination at scale. Download the GitKB CLI and join our Discord to follow along.
For teams interested in deploying GitKB across your organization — join our Alpha.